00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3477 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3088 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.078 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.079 The recommended git tool is: git 00:00:00.079 using credential 00000000-0000-0000-0000-000000000002 00:00:00.082 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.152 Fetching changes from the remote Git repository 00:00:00.154 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.208 Using shallow fetch with depth 1 00:00:00.208 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.208 > git --version # timeout=10 00:00:00.235 > git --version # 'git version 2.39.2' 00:00:00.235 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.236 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.236 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.052 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.064 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.077 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD) 00:00:06.077 > git config core.sparsecheckout # timeout=10 00:00:06.089 > git read-tree -mu HEAD # timeout=10 00:00:06.106 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5 00:00:06.124 Commit message: "inventory/dev: add missing long names" 00:00:06.124 > git rev-list --no-walk c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=10 00:00:06.211 [Pipeline] Start of Pipeline 00:00:06.225 [Pipeline] library 00:00:06.227 Loading library shm_lib@master 00:00:06.227 Library shm_lib@master is cached. Copying from home. 00:00:06.240 [Pipeline] node 00:00:21.241 Still waiting to schedule task 00:00:21.242 Waiting for next available executor on ‘vagrant-vm-host’ 00:07:01.130 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:07:01.132 [Pipeline] { 00:07:01.143 [Pipeline] catchError 00:07:01.145 [Pipeline] { 00:07:01.160 [Pipeline] wrap 00:07:01.172 [Pipeline] { 00:07:01.180 [Pipeline] stage 00:07:01.182 [Pipeline] { (Prologue) 00:07:01.201 [Pipeline] echo 00:07:01.202 Node: VM-host-SM16 00:07:01.208 [Pipeline] cleanWs 00:07:01.216 [WS-CLEANUP] Deleting project workspace... 00:07:01.216 [WS-CLEANUP] Deferred wipeout is used... 
00:07:01.222 [WS-CLEANUP] done 00:07:01.409 [Pipeline] setCustomBuildProperty 00:07:01.482 [Pipeline] nodesByLabel 00:07:01.484 Found a total of 1 nodes with the 'sorcerer' label 00:07:01.493 [Pipeline] httpRequest 00:07:01.497 HttpMethod: GET 00:07:01.498 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:07:01.500 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:07:01.501 Response Code: HTTP/1.1 200 OK 00:07:01.502 Success: Status code 200 is in the accepted range: 200,404 00:07:01.502 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:07:01.641 [Pipeline] sh 00:07:01.920 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz 00:07:01.940 [Pipeline] httpRequest 00:07:01.944 HttpMethod: GET 00:07:01.945 URL: http://10.211.164.101/packages/spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:07:01.945 Sending request to url: http://10.211.164.101/packages/spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:07:01.946 Response Code: HTTP/1.1 200 OK 00:07:01.946 Success: Status code 200 is in the accepted range: 200,404 00:07:01.947 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:07:04.088 [Pipeline] sh 00:07:04.365 + tar --no-same-owner -xf spdk_253cca4fc3a89c38e79d2e940c5a0b7bb082afcc.tar.gz 00:07:07.702 [Pipeline] sh 00:07:08.028 + git -C spdk log --oneline -n5 00:07:08.029 253cca4fc nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:07:08.029 c3870302f scripts/pkgdep: Fix install_shfmt() under FreeBSD 00:07:08.029 b65c4a87a scripts/pkgdep: Remove UADK from install_all_dependencies() 00:07:08.029 7a8d39909 Revert "test/common: Enable inherit_errexit" 00:07:08.029 4506c0c36 test/common: Enable inherit_errexit 00:07:08.047 [Pipeline] withCredentials 00:07:08.057 > git --version # timeout=10 00:07:08.069 > git --version # 'git version 2.39.2' 00:07:08.083 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:07:08.085 [Pipeline] { 00:07:08.095 [Pipeline] retry 00:07:08.098 [Pipeline] { 00:07:08.109 [Pipeline] sh 00:07:08.383 + git ls-remote http://dpdk.org/git/dpdk main 00:07:08.394 [Pipeline] } 00:07:08.415 [Pipeline] // retry 00:07:08.420 [Pipeline] } 00:07:08.441 [Pipeline] // withCredentials 00:07:08.452 [Pipeline] httpRequest 00:07:08.456 HttpMethod: GET 00:07:08.457 URL: http://10.211.164.101/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:07:08.458 Sending request to url: http://10.211.164.101/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:07:08.460 Response Code: HTTP/1.1 200 OK 00:07:08.461 Success: Status code 200 is in the accepted range: 200,404 00:07:08.461 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:07:09.537 [Pipeline] sh 00:07:09.813 + tar --no-same-owner -xf dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:07:11.227 [Pipeline] sh 00:07:11.535 + git -C dpdk log --oneline -n5 00:07:11.535 7e06c0de19 examples: move alignment attribute on types for MSVC 00:07:11.535 27595cd830 drivers: move alignment attribute on types for MSVC 00:07:11.535 0efea35a2b app: move alignment attribute on types for MSVC 00:07:11.535 e2e546ab5b version: 24.07-rc0 00:07:11.535 a9778aad62 version: 24.03.0 00:07:11.553 [Pipeline] writeFile 00:07:11.568 [Pipeline] sh 00:07:11.847 + 
jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:07:11.858 [Pipeline] sh 00:07:12.136 + cat autorun-spdk.conf 00:07:12.136 SPDK_RUN_FUNCTIONAL_TEST=1 00:07:12.136 SPDK_TEST_NVMF=1 00:07:12.136 SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:12.136 SPDK_TEST_USDT=1 00:07:12.136 SPDK_RUN_UBSAN=1 00:07:12.136 SPDK_TEST_NVMF_MDNS=1 00:07:12.136 NET_TYPE=virt 00:07:12.136 SPDK_JSONRPC_GO_CLIENT=1 00:07:12.136 SPDK_TEST_NATIVE_DPDK=main 00:07:12.136 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:07:12.136 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:12.142 RUN_NIGHTLY=1 00:07:12.144 [Pipeline] } 00:07:12.161 [Pipeline] // stage 00:07:12.176 [Pipeline] stage 00:07:12.178 [Pipeline] { (Run VM) 00:07:12.192 [Pipeline] sh 00:07:12.471 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:07:12.471 + echo 'Start stage prepare_nvme.sh' 00:07:12.471 Start stage prepare_nvme.sh 00:07:12.471 + [[ -n 3 ]] 00:07:12.471 + disk_prefix=ex3 00:07:12.471 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:07:12.471 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:07:12.471 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:07:12.471 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:07:12.471 ++ SPDK_TEST_NVMF=1 00:07:12.471 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:07:12.471 ++ SPDK_TEST_USDT=1 00:07:12.471 ++ SPDK_RUN_UBSAN=1 00:07:12.471 ++ SPDK_TEST_NVMF_MDNS=1 00:07:12.471 ++ NET_TYPE=virt 00:07:12.471 ++ SPDK_JSONRPC_GO_CLIENT=1 00:07:12.471 ++ SPDK_TEST_NATIVE_DPDK=main 00:07:12.471 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:07:12.471 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:07:12.471 ++ RUN_NIGHTLY=1 00:07:12.471 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:07:12.471 + nvme_files=() 00:07:12.471 + declare -A nvme_files 00:07:12.471 + backend_dir=/var/lib/libvirt/images/backends 00:07:12.471 + nvme_files['nvme.img']=5G 00:07:12.471 + nvme_files['nvme-cmb.img']=5G 00:07:12.471 + nvme_files['nvme-multi0.img']=4G 00:07:12.471 + nvme_files['nvme-multi1.img']=4G 00:07:12.471 + nvme_files['nvme-multi2.img']=4G 00:07:12.471 + nvme_files['nvme-openstack.img']=8G 00:07:12.471 + nvme_files['nvme-zns.img']=5G 00:07:12.471 + (( SPDK_TEST_NVME_PMR == 1 )) 00:07:12.471 + (( SPDK_TEST_FTL == 1 )) 00:07:12.471 + (( SPDK_TEST_NVME_FDP == 1 )) 00:07:12.471 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:07:12.471 + for nvme in "${!nvme_files[@]}" 00:07:12.471 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:07:12.471 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:07:12.471 + for nvme in "${!nvme_files[@]}" 00:07:12.471 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:07:12.471 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:07:12.471 + for nvme in "${!nvme_files[@]}" 00:07:12.471 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:07:12.471 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:07:12.471 + for nvme in "${!nvme_files[@]}" 00:07:12.471 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:07:12.471 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:07:12.471 + for nvme in "${!nvme_files[@]}" 00:07:12.471 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:07:12.471 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:07:12.471 + for nvme in "${!nvme_files[@]}" 00:07:12.471 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:07:12.471 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:07:12.471 + for nvme in "${!nvme_files[@]}" 00:07:12.471 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:07:12.471 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:07:12.471 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:07:12.472 + echo 'End stage prepare_nvme.sh' 00:07:12.472 End stage prepare_nvme.sh 00:07:12.483 [Pipeline] sh 00:07:12.765 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:07:12.765 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38 00:07:12.765 00:07:12.765 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:07:12.765 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:07:12.765 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:07:12.765 HELP=0 00:07:12.765 DRY_RUN=0 00:07:12.765 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:07:12.765 NVME_DISKS_TYPE=nvme,nvme, 00:07:12.765 NVME_AUTO_CREATE=0 00:07:12.765 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:07:12.765 NVME_CMB=,, 00:07:12.765 NVME_PMR=,, 00:07:12.765 NVME_ZNS=,, 00:07:12.765 NVME_MS=,, 00:07:12.765 NVME_FDP=,, 00:07:12.765 
SPDK_VAGRANT_DISTRO=fedora38 00:07:12.765 SPDK_VAGRANT_VMCPU=10 00:07:12.765 SPDK_VAGRANT_VMRAM=12288 00:07:12.765 SPDK_VAGRANT_PROVIDER=libvirt 00:07:12.765 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:07:12.765 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:07:12.765 SPDK_OPENSTACK_NETWORK=0 00:07:12.765 VAGRANT_PACKAGE_BOX=0 00:07:12.765 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:07:12.765 FORCE_DISTRO=true 00:07:12.765 VAGRANT_BOX_VERSION= 00:07:12.765 EXTRA_VAGRANTFILES= 00:07:12.765 NIC_MODEL=e1000 00:07:12.765 00:07:12.765 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:07:12.765 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:07:16.047 Bringing machine 'default' up with 'libvirt' provider... 00:07:16.614 ==> default: Creating image (snapshot of base box volume). 00:07:16.873 ==> default: Creating domain with the following settings... 00:07:16.873 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1715779529_1c7043e539004ce40c76 00:07:16.873 ==> default: -- Domain type: kvm 00:07:16.873 ==> default: -- Cpus: 10 00:07:16.873 ==> default: -- Feature: acpi 00:07:16.873 ==> default: -- Feature: apic 00:07:16.873 ==> default: -- Feature: pae 00:07:16.873 ==> default: -- Memory: 12288M 00:07:16.873 ==> default: -- Memory Backing: hugepages: 00:07:16.873 ==> default: -- Management MAC: 00:07:16.873 ==> default: -- Loader: 00:07:16.873 ==> default: -- Nvram: 00:07:16.873 ==> default: -- Base box: spdk/fedora38 00:07:16.873 ==> default: -- Storage pool: default 00:07:16.873 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1715779529_1c7043e539004ce40c76.img (20G) 00:07:16.873 ==> default: -- Volume Cache: default 00:07:16.873 ==> default: -- Kernel: 00:07:16.873 ==> default: -- Initrd: 00:07:16.873 ==> default: -- Graphics Type: vnc 00:07:16.873 ==> default: -- Graphics Port: -1 00:07:16.873 ==> default: -- Graphics IP: 127.0.0.1 00:07:16.873 ==> default: -- Graphics Password: Not defined 00:07:16.873 ==> default: -- Video Type: cirrus 00:07:16.873 ==> default: -- Video VRAM: 9216 00:07:16.873 ==> default: -- Sound Type: 00:07:16.873 ==> default: -- Keymap: en-us 00:07:16.873 ==> default: -- TPM Path: 00:07:16.873 ==> default: -- INPUT: type=mouse, bus=ps2 00:07:16.873 ==> default: -- Command line args: 00:07:16.873 ==> default: -> value=-device, 00:07:16.873 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:07:16.873 ==> default: -> value=-drive, 00:07:16.873 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:07:16.873 ==> default: -> value=-device, 00:07:16.873 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:16.873 ==> default: -> value=-device, 00:07:16.873 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:07:16.873 ==> default: -> value=-drive, 00:07:16.873 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:07:16.873 ==> default: -> value=-device, 00:07:16.873 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:16.873 ==> default: -> value=-drive, 00:07:16.873 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:07:16.873 ==> default: -> value=-device, 00:07:16.873 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:16.873 ==> default: -> value=-drive, 00:07:16.873 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:07:16.873 ==> default: -> value=-device, 00:07:16.873 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:07:17.131 ==> default: Creating shared folders metadata... 00:07:17.131 ==> default: Starting domain. 00:07:19.031 ==> default: Waiting for domain to get an IP address... 00:07:37.131 ==> default: Waiting for SSH to become available... 00:07:38.064 ==> default: Configuring and enabling network interfaces... 00:07:43.322 default: SSH address: 192.168.121.206:22 00:07:43.322 default: SSH username: vagrant 00:07:43.322 default: SSH auth method: private key 00:07:45.224 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:07:51.781 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:07:58.340 ==> default: Mounting SSHFS shared folder... 00:07:58.907 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:07:58.907 ==> default: Checking Mount.. 00:08:00.305 ==> default: Folder Successfully Mounted! 00:08:00.305 ==> default: Running provisioner: file... 00:08:01.237 default: ~/.gitconfig => .gitconfig 00:08:01.495 00:08:01.495 SUCCESS! 00:08:01.495 00:08:01.495 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:08:01.496 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:08:01.496 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:08:01.496 00:08:01.504 [Pipeline] } 00:08:01.522 [Pipeline] // stage 00:08:01.532 [Pipeline] dir 00:08:01.532 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:08:01.534 [Pipeline] { 00:08:01.549 [Pipeline] catchError 00:08:01.551 [Pipeline] { 00:08:01.564 [Pipeline] sh 00:08:01.845 + vagrant ssh-config --host vagrant 00:08:01.845 + sed -ne /^Host/,$p 00:08:01.845 + tee ssh_conf 00:08:06.034 Host vagrant 00:08:06.034 HostName 192.168.121.206 00:08:06.034 User vagrant 00:08:06.034 Port 22 00:08:06.034 UserKnownHostsFile /dev/null 00:08:06.034 StrictHostKeyChecking no 00:08:06.034 PasswordAuthentication no 00:08:06.034 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:08:06.034 IdentitiesOnly yes 00:08:06.034 LogLevel FATAL 00:08:06.034 ForwardAgent yes 00:08:06.034 ForwardX11 yes 00:08:06.034 00:08:06.047 [Pipeline] withEnv 00:08:06.049 [Pipeline] { 00:08:06.064 [Pipeline] sh 00:08:06.342 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:08:06.342 source /etc/os-release 00:08:06.342 [[ -e /image.version ]] && img=$(< /image.version) 00:08:06.342 # Minimal, systemd-like check. 
00:08:06.342 if [[ -e /.dockerenv ]]; then 00:08:06.342 # Clear garbage from the node's name: 00:08:06.342 # agt-er_autotest_547-896 -> autotest_547-896 00:08:06.342 # $HOSTNAME is the actual container id 00:08:06.342 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:08:06.342 if mountpoint -q /etc/hostname; then 00:08:06.342 # We can assume this is a mount from a host where container is running, 00:08:06.342 # so fetch its hostname to easily identify the target swarm worker. 00:08:06.342 container="$(< /etc/hostname) ($agent)" 00:08:06.342 else 00:08:06.342 # Fallback 00:08:06.342 container=$agent 00:08:06.342 fi 00:08:06.342 fi 00:08:06.342 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:08:06.342 00:08:06.610 [Pipeline] } 00:08:06.631 [Pipeline] // withEnv 00:08:06.639 [Pipeline] setCustomBuildProperty 00:08:06.652 [Pipeline] stage 00:08:06.655 [Pipeline] { (Tests) 00:08:06.670 [Pipeline] sh 00:08:06.943 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:08:07.214 [Pipeline] timeout 00:08:07.214 Timeout set to expire in 40 min 00:08:07.216 [Pipeline] { 00:08:07.230 [Pipeline] sh 00:08:07.508 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:08:08.074 HEAD is now at 253cca4fc nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:08:08.086 [Pipeline] sh 00:08:08.370 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:08:08.641 [Pipeline] sh 00:08:08.917 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:08:09.189 [Pipeline] sh 00:08:09.465 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:08:09.723 ++ readlink -f spdk_repo 00:08:09.723 + DIR_ROOT=/home/vagrant/spdk_repo 00:08:09.723 + [[ -n /home/vagrant/spdk_repo ]] 00:08:09.723 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:08:09.723 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:08:09.723 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:08:09.723 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:08:09.723 + [[ -d /home/vagrant/spdk_repo/output ]] 00:08:09.723 + cd /home/vagrant/spdk_repo 00:08:09.723 + source /etc/os-release 00:08:09.723 ++ NAME='Fedora Linux' 00:08:09.723 ++ VERSION='38 (Cloud Edition)' 00:08:09.723 ++ ID=fedora 00:08:09.723 ++ VERSION_ID=38 00:08:09.723 ++ VERSION_CODENAME= 00:08:09.723 ++ PLATFORM_ID=platform:f38 00:08:09.723 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:08:09.723 ++ ANSI_COLOR='0;38;2;60;110;180' 00:08:09.723 ++ LOGO=fedora-logo-icon 00:08:09.723 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:08:09.723 ++ HOME_URL=https://fedoraproject.org/ 00:08:09.723 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:08:09.723 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:08:09.723 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:08:09.723 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:08:09.723 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:08:09.723 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:08:09.723 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:08:09.723 ++ SUPPORT_END=2024-05-14 00:08:09.723 ++ VARIANT='Cloud Edition' 00:08:09.723 ++ VARIANT_ID=cloud 00:08:09.723 + uname -a 00:08:09.723 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:08:09.723 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:09.980 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:09.980 Hugepages 00:08:09.980 node hugesize free / total 00:08:09.980 node0 1048576kB 0 / 0 00:08:10.239 node0 2048kB 0 / 0 00:08:10.239 00:08:10.239 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:10.239 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:10.239 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:08:10.239 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:08:10.239 + rm -f /tmp/spdk-ld-path 00:08:10.239 + source autorun-spdk.conf 00:08:10.239 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:08:10.239 ++ SPDK_TEST_NVMF=1 00:08:10.239 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:10.239 ++ SPDK_TEST_USDT=1 00:08:10.239 ++ SPDK_RUN_UBSAN=1 00:08:10.239 ++ SPDK_TEST_NVMF_MDNS=1 00:08:10.239 ++ NET_TYPE=virt 00:08:10.239 ++ SPDK_JSONRPC_GO_CLIENT=1 00:08:10.239 ++ SPDK_TEST_NATIVE_DPDK=main 00:08:10.239 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:08:10.239 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:10.239 ++ RUN_NIGHTLY=1 00:08:10.239 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:08:10.239 + [[ -n '' ]] 00:08:10.239 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:08:10.239 + for M in /var/spdk/build-*-manifest.txt 00:08:10.239 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:08:10.239 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:10.239 + for M in /var/spdk/build-*-manifest.txt 00:08:10.239 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:08:10.239 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:08:10.239 ++ uname 00:08:10.239 + [[ Linux == \L\i\n\u\x ]] 00:08:10.239 + sudo dmesg -T 00:08:10.239 + sudo dmesg --clear 00:08:10.239 + dmesg_pid=5997 00:08:10.239 + [[ Fedora Linux == FreeBSD ]] 00:08:10.239 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:10.239 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:10.239 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:08:10.239 + sudo dmesg -Tw 00:08:10.239 + [[ 
-x /usr/src/fio-static/fio ]] 00:08:10.239 + export FIO_BIN=/usr/src/fio-static/fio 00:08:10.239 + FIO_BIN=/usr/src/fio-static/fio 00:08:10.239 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:08:10.239 + [[ ! -v VFIO_QEMU_BIN ]] 00:08:10.239 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:08:10.239 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:10.239 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:10.239 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:08:10.239 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:10.239 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:10.239 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:08:10.239 Test configuration: 00:08:10.239 SPDK_RUN_FUNCTIONAL_TEST=1 00:08:10.239 SPDK_TEST_NVMF=1 00:08:10.239 SPDK_TEST_NVMF_TRANSPORT=tcp 00:08:10.239 SPDK_TEST_USDT=1 00:08:10.239 SPDK_RUN_UBSAN=1 00:08:10.239 SPDK_TEST_NVMF_MDNS=1 00:08:10.239 NET_TYPE=virt 00:08:10.239 SPDK_JSONRPC_GO_CLIENT=1 00:08:10.239 SPDK_TEST_NATIVE_DPDK=main 00:08:10.239 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:08:10.239 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:08:10.497 RUN_NIGHTLY=1 13:26:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.497 13:26:23 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:08:10.497 13:26:23 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.497 13:26:23 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.497 13:26:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.497 13:26:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.497 13:26:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.497 13:26:23 -- paths/export.sh@5 -- $ export PATH 00:08:10.497 13:26:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.497 13:26:23 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:08:10.497 13:26:23 -- common/autobuild_common.sh@437 -- $ date +%s 00:08:10.497 
13:26:23 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715779583.XXXXXX 00:08:10.497 13:26:23 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715779583.wma2T7 00:08:10.497 13:26:23 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:08:10.497 13:26:23 -- common/autobuild_common.sh@443 -- $ '[' -n main ']' 00:08:10.497 13:26:23 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:08:10.497 13:26:23 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:08:10.497 13:26:23 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:08:10.497 13:26:23 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:08:10.497 13:26:23 -- common/autobuild_common.sh@453 -- $ get_config_params 00:08:10.497 13:26:23 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:08:10.497 13:26:23 -- common/autotest_common.sh@10 -- $ set +x 00:08:10.497 13:26:23 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:08:10.497 13:26:23 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:08:10.497 13:26:23 -- pm/common@17 -- $ local monitor 00:08:10.497 13:26:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:10.497 13:26:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:08:10.497 13:26:23 -- pm/common@25 -- $ sleep 1 00:08:10.497 13:26:23 -- pm/common@21 -- $ date +%s 00:08:10.497 13:26:23 -- pm/common@21 -- $ date +%s 00:08:10.497 13:26:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715779583 00:08:10.497 13:26:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715779583 00:08:10.497 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715779583_collect-vmstat.pm.log 00:08:10.497 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715779583_collect-cpu-load.pm.log 00:08:11.431 13:26:24 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:08:11.431 13:26:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:08:11.431 13:26:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:08:11.431 13:26:24 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:08:11.431 13:26:24 -- spdk/autobuild.sh@16 -- $ date -u 00:08:11.431 Wed May 15 01:26:24 PM UTC 2024 00:08:11.431 13:26:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:08:11.431 v24.05-pre-662-g253cca4fc 00:08:11.431 13:26:24 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:08:11.431 13:26:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:08:11.431 13:26:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:08:11.431 13:26:24 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:08:11.431 13:26:24 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:08:11.431 13:26:24 -- 
common/autotest_common.sh@10 -- $ set +x 00:08:11.431 ************************************ 00:08:11.431 START TEST ubsan 00:08:11.431 ************************************ 00:08:11.431 using ubsan 00:08:11.431 13:26:24 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:08:11.431 00:08:11.431 real 0m0.000s 00:08:11.431 user 0m0.000s 00:08:11.431 sys 0m0.000s 00:08:11.431 13:26:24 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:08:11.431 ************************************ 00:08:11.431 13:26:24 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:08:11.431 END TEST ubsan 00:08:11.431 ************************************ 00:08:11.431 13:26:24 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:08:11.431 13:26:24 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:08:11.431 13:26:24 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:08:11.431 13:26:24 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:08:11.431 13:26:24 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:08:11.432 13:26:24 -- common/autotest_common.sh@10 -- $ set +x 00:08:11.432 ************************************ 00:08:11.432 START TEST build_native_dpdk 00:08:11.432 ************************************ 00:08:11.432 13:26:24 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:08:11.432 7e06c0de19 examples: move alignment attribute on types for MSVC 00:08:11.432 27595cd830 drivers: move alignment attribute on types for MSVC 00:08:11.432 0efea35a2b app: move alignment attribute on types for MSVC 00:08:11.432 e2e546ab5b version: 24.07-rc0 00:08:11.432 a9778aad62 version: 24.03.0 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc0 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:08:11.432 13:26:24 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc0 21.11.0 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc0 '<' 21.11.0 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:08:11.432 
13:26:24 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:08:11.432 13:26:24 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:08:11.691 13:26:24 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:08:11.691 13:26:24 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:08:11.691 13:26:24 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:08:11.691 13:26:24 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:08:11.691 patching file config/rte_config.h 00:08:11.691 Hunk #1 succeeded at 70 (offset 11 lines). 00:08:11.691 13:26:24 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:08:11.691 13:26:24 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:08:11.691 13:26:24 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:08:11.691 13:26:24 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:08:11.691 13:26:24 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:08:16.957 The Meson build system 00:08:16.958 Version: 1.3.1 00:08:16.958 Source dir: /home/vagrant/spdk_repo/dpdk 00:08:16.958 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:08:16.958 Build type: native build 00:08:16.958 Program cat found: YES (/usr/bin/cat) 00:08:16.958 Project name: DPDK 00:08:16.958 Project version: 24.07.0-rc0 00:08:16.958 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:08:16.958 C linker for the host machine: gcc ld.bfd 2.39-16 00:08:16.958 Host machine cpu family: x86_64 00:08:16.958 Host machine cpu: x86_64 00:08:16.958 Message: ## Building in Developer Mode ## 00:08:16.958 Program pkg-config found: YES (/usr/bin/pkg-config) 00:08:16.958 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:08:16.958 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:08:16.958 Program python3 found: YES (/usr/bin/python3) 00:08:16.958 Program cat found: YES (/usr/bin/cat) 00:08:16.958 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:08:16.958 Compiler for C supports arguments -march=native: YES 00:08:16.958 Checking for size of "void *" : 8 00:08:16.958 Checking for size of "void *" : 8 (cached) 00:08:16.958 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:08:16.958 Library m found: YES 00:08:16.958 Library numa found: YES 00:08:16.958 Has header "numaif.h" : YES 00:08:16.958 Library fdt found: NO 00:08:16.958 Library execinfo found: NO 00:08:16.958 Has header "execinfo.h" : YES 00:08:16.958 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:08:16.958 Run-time dependency libarchive found: NO (tried pkgconfig) 00:08:16.958 Run-time dependency libbsd found: NO (tried pkgconfig) 00:08:16.958 Run-time dependency jansson found: NO (tried pkgconfig) 00:08:16.958 Run-time dependency openssl found: YES 3.0.9 00:08:16.958 Run-time dependency libpcap found: YES 1.10.4 00:08:16.958 Has header "pcap.h" with dependency libpcap: YES 00:08:16.958 Compiler for C supports arguments -Wcast-qual: YES 00:08:16.958 Compiler for C supports arguments -Wdeprecated: YES 00:08:16.958 Compiler for C supports arguments -Wformat: YES 00:08:16.958 Compiler for C supports arguments -Wformat-nonliteral: NO 00:08:16.958 Compiler for C supports arguments -Wformat-security: NO 00:08:16.958 Compiler for C supports arguments -Wmissing-declarations: YES 00:08:16.958 Compiler for C supports arguments -Wmissing-prototypes: YES 00:08:16.958 Compiler for C supports arguments -Wnested-externs: YES 00:08:16.958 Compiler for C supports arguments -Wold-style-definition: YES 00:08:16.958 Compiler for C supports arguments -Wpointer-arith: YES 00:08:16.958 Compiler for C supports arguments -Wsign-compare: YES 00:08:16.958 Compiler for C supports arguments -Wstrict-prototypes: YES 00:08:16.958 Compiler for C supports arguments -Wundef: YES 00:08:16.958 Compiler for C supports arguments -Wwrite-strings: YES 00:08:16.958 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:08:16.958 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:08:16.958 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:08:16.958 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:08:16.958 Program objdump found: YES (/usr/bin/objdump) 00:08:16.958 Compiler for C supports arguments -mavx512f: YES 00:08:16.958 Checking if "AVX512 checking" compiles: YES 00:08:16.958 Fetching value of define "__SSE4_2__" : 1 00:08:16.958 Fetching value of define "__AES__" : 1 00:08:16.958 Fetching value of define "__AVX__" : 1 00:08:16.958 Fetching value of define "__AVX2__" : 1 00:08:16.958 Fetching value of define "__AVX512BW__" : (undefined) 00:08:16.958 Fetching value of define "__AVX512CD__" : (undefined) 00:08:16.958 Fetching value of define "__AVX512DQ__" : (undefined) 00:08:16.958 Fetching value of define "__AVX512F__" : (undefined) 00:08:16.958 Fetching value of define "__AVX512VL__" : (undefined) 00:08:16.958 Fetching value of define "__PCLMUL__" : 1 00:08:16.958 Fetching value of define "__RDRND__" : 1 00:08:16.958 Fetching value of define "__RDSEED__" : 1 00:08:16.958 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:08:16.958 Compiler for C supports arguments -Wno-format-truncation: YES 00:08:16.958 Message: lib/log: Defining dependency "log" 00:08:16.958 Message: lib/kvargs: Defining dependency "kvargs" 00:08:16.958 Message: lib/argparse: Defining dependency "argparse" 00:08:16.958 Message: lib/telemetry: Defining dependency "telemetry" 00:08:16.958 Checking for function "getentropy" : NO 
00:08:16.958 Message: lib/eal: Defining dependency "eal" 00:08:16.958 Message: lib/ring: Defining dependency "ring" 00:08:16.958 Message: lib/rcu: Defining dependency "rcu" 00:08:16.958 Message: lib/mempool: Defining dependency "mempool" 00:08:16.958 Message: lib/mbuf: Defining dependency "mbuf" 00:08:16.958 Fetching value of define "__PCLMUL__" : 1 (cached) 00:08:16.958 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:08:16.958 Compiler for C supports arguments -mpclmul: YES 00:08:16.958 Compiler for C supports arguments -maes: YES 00:08:16.958 Compiler for C supports arguments -mavx512f: YES (cached) 00:08:16.958 Compiler for C supports arguments -mavx512bw: YES 00:08:16.958 Compiler for C supports arguments -mavx512dq: YES 00:08:16.958 Compiler for C supports arguments -mavx512vl: YES 00:08:16.958 Compiler for C supports arguments -mvpclmulqdq: YES 00:08:16.958 Compiler for C supports arguments -mavx2: YES 00:08:16.958 Compiler for C supports arguments -mavx: YES 00:08:16.958 Message: lib/net: Defining dependency "net" 00:08:16.958 Message: lib/meter: Defining dependency "meter" 00:08:16.958 Message: lib/ethdev: Defining dependency "ethdev" 00:08:16.958 Message: lib/pci: Defining dependency "pci" 00:08:16.958 Message: lib/cmdline: Defining dependency "cmdline" 00:08:16.958 Message: lib/metrics: Defining dependency "metrics" 00:08:16.958 Message: lib/hash: Defining dependency "hash" 00:08:16.958 Message: lib/timer: Defining dependency "timer" 00:08:16.958 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:08:16.958 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:08:16.958 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:08:16.958 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:08:16.958 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:08:16.958 Message: lib/acl: Defining dependency "acl" 00:08:16.958 Message: lib/bbdev: Defining dependency "bbdev" 00:08:16.958 Message: lib/bitratestats: Defining dependency "bitratestats" 00:08:16.958 Run-time dependency libelf found: YES 0.190 00:08:16.958 Message: lib/bpf: Defining dependency "bpf" 00:08:16.958 Message: lib/cfgfile: Defining dependency "cfgfile" 00:08:16.958 Message: lib/compressdev: Defining dependency "compressdev" 00:08:16.958 Message: lib/cryptodev: Defining dependency "cryptodev" 00:08:16.958 Message: lib/distributor: Defining dependency "distributor" 00:08:16.958 Message: lib/dmadev: Defining dependency "dmadev" 00:08:16.958 Message: lib/efd: Defining dependency "efd" 00:08:16.958 Message: lib/eventdev: Defining dependency "eventdev" 00:08:16.958 Message: lib/dispatcher: Defining dependency "dispatcher" 00:08:16.958 Message: lib/gpudev: Defining dependency "gpudev" 00:08:16.958 Message: lib/gro: Defining dependency "gro" 00:08:16.958 Message: lib/gso: Defining dependency "gso" 00:08:16.958 Message: lib/ip_frag: Defining dependency "ip_frag" 00:08:16.958 Message: lib/jobstats: Defining dependency "jobstats" 00:08:16.958 Message: lib/latencystats: Defining dependency "latencystats" 00:08:16.958 Message: lib/lpm: Defining dependency "lpm" 00:08:16.958 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:08:16.958 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:08:16.958 Fetching value of define "__AVX512IFMA__" : (undefined) 00:08:16.958 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:08:16.958 Message: lib/member: Defining dependency "member" 00:08:16.958 
Message: lib/pcapng: Defining dependency "pcapng" 00:08:16.958 Compiler for C supports arguments -Wno-cast-qual: YES 00:08:16.958 Message: lib/power: Defining dependency "power" 00:08:16.958 Message: lib/rawdev: Defining dependency "rawdev" 00:08:16.958 Message: lib/regexdev: Defining dependency "regexdev" 00:08:16.958 Message: lib/mldev: Defining dependency "mldev" 00:08:16.958 Message: lib/rib: Defining dependency "rib" 00:08:16.958 Message: lib/reorder: Defining dependency "reorder" 00:08:16.958 Message: lib/sched: Defining dependency "sched" 00:08:16.958 Message: lib/security: Defining dependency "security" 00:08:16.958 Message: lib/stack: Defining dependency "stack" 00:08:16.958 Has header "linux/userfaultfd.h" : YES 00:08:16.958 Has header "linux/vduse.h" : YES 00:08:16.958 Message: lib/vhost: Defining dependency "vhost" 00:08:16.958 Message: lib/ipsec: Defining dependency "ipsec" 00:08:16.958 Message: lib/pdcp: Defining dependency "pdcp" 00:08:16.958 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:08:16.958 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:08:16.958 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:08:16.958 Compiler for C supports arguments -mavx512bw: YES (cached) 00:08:16.958 Message: lib/fib: Defining dependency "fib" 00:08:16.958 Message: lib/port: Defining dependency "port" 00:08:16.958 Message: lib/pdump: Defining dependency "pdump" 00:08:16.958 Message: lib/table: Defining dependency "table" 00:08:16.959 Message: lib/pipeline: Defining dependency "pipeline" 00:08:16.959 Message: lib/graph: Defining dependency "graph" 00:08:16.959 Message: lib/node: Defining dependency "node" 00:08:16.959 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:08:16.959 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:08:16.959 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:08:18.334 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:08:18.334 Compiler for C supports arguments -Wno-sign-compare: YES 00:08:18.334 Compiler for C supports arguments -Wno-unused-value: YES 00:08:18.334 Compiler for C supports arguments -Wno-format: YES 00:08:18.334 Compiler for C supports arguments -Wno-format-security: YES 00:08:18.334 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:08:18.334 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:08:18.334 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:08:18.334 Compiler for C supports arguments -Wno-unused-parameter: YES 00:08:18.334 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:08:18.334 Compiler for C supports arguments -mavx512f: YES (cached) 00:08:18.334 Compiler for C supports arguments -mavx512bw: YES (cached) 00:08:18.334 Compiler for C supports arguments -march=skylake-avx512: YES 00:08:18.334 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:08:18.334 Has header "sys/epoll.h" : YES 00:08:18.334 Program doxygen found: YES (/usr/bin/doxygen) 00:08:18.334 Configuring doxy-api-html.conf using configuration 00:08:18.334 Configuring doxy-api-man.conf using configuration 00:08:18.334 Program mandb found: YES (/usr/bin/mandb) 00:08:18.334 Program sphinx-build found: NO 00:08:18.334 Configuring rte_build_config.h using configuration 00:08:18.334 Message: 00:08:18.334 ================= 00:08:18.334 Applications Enabled 00:08:18.334 ================= 00:08:18.334 00:08:18.334 apps: 00:08:18.334 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, 
test-compress-perf, 00:08:18.334 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:08:18.334 test-pmd, test-regex, test-sad, test-security-perf, 00:08:18.334 00:08:18.334 Message: 00:08:18.334 ================= 00:08:18.334 Libraries Enabled 00:08:18.334 ================= 00:08:18.334 00:08:18.334 libs: 00:08:18.334 log, kvargs, argparse, telemetry, eal, ring, rcu, mempool, 00:08:18.334 mbuf, net, meter, ethdev, pci, cmdline, metrics, hash, 00:08:18.334 timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, 00:08:18.334 distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, 00:08:18.334 ip_frag, jobstats, latencystats, lpm, member, pcapng, power, rawdev, 00:08:18.334 regexdev, mldev, rib, reorder, sched, security, stack, vhost, 00:08:18.334 ipsec, pdcp, fib, port, pdump, table, pipeline, graph, 00:08:18.334 node, 00:08:18.334 00:08:18.334 Message: 00:08:18.334 =============== 00:08:18.334 Drivers Enabled 00:08:18.334 =============== 00:08:18.334 00:08:18.334 common: 00:08:18.334 00:08:18.334 bus: 00:08:18.334 pci, vdev, 00:08:18.334 mempool: 00:08:18.334 ring, 00:08:18.334 dma: 00:08:18.334 00:08:18.334 net: 00:08:18.334 i40e, 00:08:18.334 raw: 00:08:18.334 00:08:18.334 crypto: 00:08:18.334 00:08:18.334 compress: 00:08:18.334 00:08:18.334 regex: 00:08:18.334 00:08:18.334 ml: 00:08:18.334 00:08:18.334 vdpa: 00:08:18.334 00:08:18.334 event: 00:08:18.334 00:08:18.334 baseband: 00:08:18.334 00:08:18.334 gpu: 00:08:18.334 00:08:18.334 00:08:18.334 Message: 00:08:18.334 ================= 00:08:18.334 Content Skipped 00:08:18.334 ================= 00:08:18.334 00:08:18.334 apps: 00:08:18.334 00:08:18.334 libs: 00:08:18.334 00:08:18.334 drivers: 00:08:18.334 common/cpt: not in enabled drivers build config 00:08:18.334 common/dpaax: not in enabled drivers build config 00:08:18.334 common/iavf: not in enabled drivers build config 00:08:18.334 common/idpf: not in enabled drivers build config 00:08:18.334 common/ionic: not in enabled drivers build config 00:08:18.334 common/mvep: not in enabled drivers build config 00:08:18.334 common/octeontx: not in enabled drivers build config 00:08:18.334 bus/auxiliary: not in enabled drivers build config 00:08:18.334 bus/cdx: not in enabled drivers build config 00:08:18.334 bus/dpaa: not in enabled drivers build config 00:08:18.334 bus/fslmc: not in enabled drivers build config 00:08:18.334 bus/ifpga: not in enabled drivers build config 00:08:18.334 bus/platform: not in enabled drivers build config 00:08:18.334 bus/uacce: not in enabled drivers build config 00:08:18.334 bus/vmbus: not in enabled drivers build config 00:08:18.334 common/cnxk: not in enabled drivers build config 00:08:18.334 common/mlx5: not in enabled drivers build config 00:08:18.334 common/nfp: not in enabled drivers build config 00:08:18.334 common/nitrox: not in enabled drivers build config 00:08:18.334 common/qat: not in enabled drivers build config 00:08:18.334 common/sfc_efx: not in enabled drivers build config 00:08:18.334 mempool/bucket: not in enabled drivers build config 00:08:18.334 mempool/cnxk: not in enabled drivers build config 00:08:18.334 mempool/dpaa: not in enabled drivers build config 00:08:18.334 mempool/dpaa2: not in enabled drivers build config 00:08:18.334 mempool/octeontx: not in enabled drivers build config 00:08:18.334 mempool/stack: not in enabled drivers build config 00:08:18.334 dma/cnxk: not in enabled drivers build config 00:08:18.334 dma/dpaa: not in enabled drivers build 
config 00:08:18.334 dma/dpaa2: not in enabled drivers build config 00:08:18.334 dma/hisilicon: not in enabled drivers build config 00:08:18.334 dma/idxd: not in enabled drivers build config 00:08:18.334 dma/ioat: not in enabled drivers build config 00:08:18.334 dma/skeleton: not in enabled drivers build config 00:08:18.334 net/af_packet: not in enabled drivers build config 00:08:18.334 net/af_xdp: not in enabled drivers build config 00:08:18.334 net/ark: not in enabled drivers build config 00:08:18.334 net/atlantic: not in enabled drivers build config 00:08:18.334 net/avp: not in enabled drivers build config 00:08:18.334 net/axgbe: not in enabled drivers build config 00:08:18.334 net/bnx2x: not in enabled drivers build config 00:08:18.334 net/bnxt: not in enabled drivers build config 00:08:18.334 net/bonding: not in enabled drivers build config 00:08:18.334 net/cnxk: not in enabled drivers build config 00:08:18.334 net/cpfl: not in enabled drivers build config 00:08:18.334 net/cxgbe: not in enabled drivers build config 00:08:18.334 net/dpaa: not in enabled drivers build config 00:08:18.334 net/dpaa2: not in enabled drivers build config 00:08:18.334 net/e1000: not in enabled drivers build config 00:08:18.334 net/ena: not in enabled drivers build config 00:08:18.334 net/enetc: not in enabled drivers build config 00:08:18.334 net/enetfec: not in enabled drivers build config 00:08:18.334 net/enic: not in enabled drivers build config 00:08:18.334 net/failsafe: not in enabled drivers build config 00:08:18.334 net/fm10k: not in enabled drivers build config 00:08:18.334 net/gve: not in enabled drivers build config 00:08:18.334 net/hinic: not in enabled drivers build config 00:08:18.334 net/hns3: not in enabled drivers build config 00:08:18.334 net/iavf: not in enabled drivers build config 00:08:18.334 net/ice: not in enabled drivers build config 00:08:18.334 net/idpf: not in enabled drivers build config 00:08:18.334 net/igc: not in enabled drivers build config 00:08:18.334 net/ionic: not in enabled drivers build config 00:08:18.334 net/ipn3ke: not in enabled drivers build config 00:08:18.334 net/ixgbe: not in enabled drivers build config 00:08:18.334 net/mana: not in enabled drivers build config 00:08:18.334 net/memif: not in enabled drivers build config 00:08:18.334 net/mlx4: not in enabled drivers build config 00:08:18.334 net/mlx5: not in enabled drivers build config 00:08:18.334 net/mvneta: not in enabled drivers build config 00:08:18.334 net/mvpp2: not in enabled drivers build config 00:08:18.334 net/netvsc: not in enabled drivers build config 00:08:18.334 net/nfb: not in enabled drivers build config 00:08:18.334 net/nfp: not in enabled drivers build config 00:08:18.334 net/ngbe: not in enabled drivers build config 00:08:18.334 net/null: not in enabled drivers build config 00:08:18.335 net/octeontx: not in enabled drivers build config 00:08:18.335 net/octeon_ep: not in enabled drivers build config 00:08:18.335 net/pcap: not in enabled drivers build config 00:08:18.335 net/pfe: not in enabled drivers build config 00:08:18.335 net/qede: not in enabled drivers build config 00:08:18.335 net/ring: not in enabled drivers build config 00:08:18.335 net/sfc: not in enabled drivers build config 00:08:18.335 net/softnic: not in enabled drivers build config 00:08:18.335 net/tap: not in enabled drivers build config 00:08:18.335 net/thunderx: not in enabled drivers build config 00:08:18.335 net/txgbe: not in enabled drivers build config 00:08:18.335 net/vdev_netvsc: not in enabled drivers build config 
00:08:18.335 net/vhost: not in enabled drivers build config 00:08:18.335 net/virtio: not in enabled drivers build config 00:08:18.335 net/vmxnet3: not in enabled drivers build config 00:08:18.335 raw/cnxk_bphy: not in enabled drivers build config 00:08:18.335 raw/cnxk_gpio: not in enabled drivers build config 00:08:18.335 raw/dpaa2_cmdif: not in enabled drivers build config 00:08:18.335 raw/ifpga: not in enabled drivers build config 00:08:18.335 raw/ntb: not in enabled drivers build config 00:08:18.335 raw/skeleton: not in enabled drivers build config 00:08:18.335 crypto/armv8: not in enabled drivers build config 00:08:18.335 crypto/bcmfs: not in enabled drivers build config 00:08:18.335 crypto/caam_jr: not in enabled drivers build config 00:08:18.335 crypto/ccp: not in enabled drivers build config 00:08:18.335 crypto/cnxk: not in enabled drivers build config 00:08:18.335 crypto/dpaa_sec: not in enabled drivers build config 00:08:18.335 crypto/dpaa2_sec: not in enabled drivers build config 00:08:18.335 crypto/ipsec_mb: not in enabled drivers build config 00:08:18.335 crypto/mlx5: not in enabled drivers build config 00:08:18.335 crypto/mvsam: not in enabled drivers build config 00:08:18.335 crypto/nitrox: not in enabled drivers build config 00:08:18.335 crypto/null: not in enabled drivers build config 00:08:18.335 crypto/octeontx: not in enabled drivers build config 00:08:18.335 crypto/openssl: not in enabled drivers build config 00:08:18.335 crypto/scheduler: not in enabled drivers build config 00:08:18.335 crypto/uadk: not in enabled drivers build config 00:08:18.335 crypto/virtio: not in enabled drivers build config 00:08:18.335 compress/isal: not in enabled drivers build config 00:08:18.335 compress/mlx5: not in enabled drivers build config 00:08:18.335 compress/nitrox: not in enabled drivers build config 00:08:18.335 compress/octeontx: not in enabled drivers build config 00:08:18.335 compress/zlib: not in enabled drivers build config 00:08:18.335 regex/mlx5: not in enabled drivers build config 00:08:18.335 regex/cn9k: not in enabled drivers build config 00:08:18.335 ml/cnxk: not in enabled drivers build config 00:08:18.335 vdpa/ifc: not in enabled drivers build config 00:08:18.335 vdpa/mlx5: not in enabled drivers build config 00:08:18.335 vdpa/nfp: not in enabled drivers build config 00:08:18.335 vdpa/sfc: not in enabled drivers build config 00:08:18.335 event/cnxk: not in enabled drivers build config 00:08:18.335 event/dlb2: not in enabled drivers build config 00:08:18.335 event/dpaa: not in enabled drivers build config 00:08:18.335 event/dpaa2: not in enabled drivers build config 00:08:18.335 event/dsw: not in enabled drivers build config 00:08:18.335 event/opdl: not in enabled drivers build config 00:08:18.335 event/skeleton: not in enabled drivers build config 00:08:18.335 event/sw: not in enabled drivers build config 00:08:18.335 event/octeontx: not in enabled drivers build config 00:08:18.335 baseband/acc: not in enabled drivers build config 00:08:18.335 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:08:18.335 baseband/fpga_lte_fec: not in enabled drivers build config 00:08:18.335 baseband/la12xx: not in enabled drivers build config 00:08:18.335 baseband/null: not in enabled drivers build config 00:08:18.335 baseband/turbo_sw: not in enabled drivers build config 00:08:18.335 gpu/cuda: not in enabled drivers build config 00:08:18.335 00:08:18.335 00:08:18.335 Build targets in project: 224 00:08:18.335 00:08:18.335 DPDK 24.07.0-rc0 00:08:18.335 00:08:18.335 User 
defined options 00:08:18.335 libdir : lib 00:08:18.335 prefix : /home/vagrant/spdk_repo/dpdk/build 00:08:18.335 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:08:18.335 c_link_args : 00:08:18.335 enable_docs : false 00:08:18.335 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:08:18.335 enable_kmods : false 00:08:18.335 machine : native 00:08:18.335 tests : false 00:08:18.335 00:08:18.335 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:08:18.335 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:08:18.335 13:26:31 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:08:18.593 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:08:18.593 [1/722] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:08:18.593 [2/722] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:08:18.593 [3/722] Linking static target lib/librte_kvargs.a 00:08:18.593 [4/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:08:18.851 [5/722] Compiling C object lib/librte_log.a.p/log_log.c.o 00:08:18.851 [6/722] Linking static target lib/librte_log.a 00:08:18.851 [7/722] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:08:18.851 [8/722] Linking static target lib/librte_argparse.a 00:08:18.851 [9/722] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:08:19.108 [10/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:08:19.108 [11/722] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:08:19.108 [12/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:08:19.108 [13/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:08:19.108 [14/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:08:19.108 [15/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:08:19.108 [16/722] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:08:19.366 [17/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:08:19.366 [18/722] Linking target lib/librte_log.so.24.2 00:08:19.366 [19/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:08:19.366 [20/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:08:19.624 [21/722] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols 00:08:19.624 [22/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:08:19.624 [23/722] Linking target lib/librte_kvargs.so.24.2 00:08:19.624 [24/722] Linking target lib/librte_argparse.so.24.2 00:08:19.882 [25/722] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols 00:08:19.882 [26/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:08:19.882 [27/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:08:19.882 [28/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:08:19.882 [29/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:08:19.882 [30/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:08:19.882 [31/722] Linking static target lib/librte_telemetry.a 00:08:19.882 
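For reference alongside the configuration summary above: a minimal sketch of how the printed "User defined options" could be reproduced by hand, using the non-deprecated `meson setup` form that the warning above asks for. This is reconstructed only from the values shown in the log, not the exact invocation performed by common/autobuild_common.sh, and it assumes the commands are run from the DPDK source tree (/home/vagrant/spdk_repo/dpdk):

    # sketch only: option values copied from the "User defined options" block above
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # build with the same parallelism as the job above
    ninja -C build-tmp -j10

Restricting enable_drivers to the bus/pci, bus/vdev, mempool/ring and net/i40e groups is what yields the short "Drivers Enabled" list above and the long "not in enabled drivers build config" entries in the "Content Skipped" section.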
[32/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:08:19.882 [33/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:08:20.140 [34/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:08:20.140 [35/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:08:20.398 [36/722] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:08:20.398 [37/722] Linking target lib/librte_telemetry.so.24.2 00:08:20.398 [38/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:08:20.398 [39/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:08:20.398 [40/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:08:20.398 [41/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:08:20.398 [42/722] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:08:20.398 [43/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:08:20.398 [44/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:08:20.668 [45/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:08:20.668 [46/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:08:20.668 [47/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:08:20.668 [48/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:08:20.940 [49/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:08:20.940 [50/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:08:20.940 [51/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:08:21.198 [52/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:08:21.198 [53/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:08:21.198 [54/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:08:21.198 [55/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:08:21.198 [56/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:08:21.456 [57/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:08:21.456 [58/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:08:21.456 [59/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:08:21.456 [60/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:08:21.714 [61/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:08:21.714 [62/722] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:08:21.714 [63/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:08:21.714 [64/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:08:21.972 [65/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:08:21.972 [66/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:08:21.972 [67/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:08:21.972 [68/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:08:21.972 [69/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:08:21.972 [70/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:08:22.231 [71/722] 
Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:08:22.489 [72/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:08:22.489 [73/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:08:22.489 [74/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:08:22.489 [75/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:08:22.489 [76/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:08:22.489 [77/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:08:22.489 [78/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:08:22.747 [79/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:08:22.747 [80/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:08:22.747 [81/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:08:22.747 [82/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:08:23.005 [83/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:08:23.005 [84/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:08:23.263 [85/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:08:23.263 [86/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:08:23.263 [87/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:08:23.263 [88/722] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:08:23.263 [89/722] Linking static target lib/librte_ring.a 00:08:23.521 [90/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:08:23.521 [91/722] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:08:23.521 [92/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:08:23.521 [93/722] Linking static target lib/librte_eal.a 00:08:23.779 [94/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:08:23.779 [95/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:08:23.779 [96/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:08:23.779 [97/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:08:24.037 [98/722] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:08:24.037 [99/722] Linking static target lib/librte_mempool.a 00:08:24.037 [100/722] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:08:24.037 [101/722] Linking static target lib/librte_rcu.a 00:08:24.037 [102/722] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:08:24.037 [103/722] Linking static target lib/net/libnet_crc_avx512_lib.a 00:08:24.295 [104/722] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.295 [105/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:08:24.553 [106/722] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:08:24.553 [107/722] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:08:24.553 [108/722] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:08:24.553 [109/722] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:08:24.553 [110/722] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:08:24.553 [111/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:08:24.811 [112/722] Linking static target lib/librte_mbuf.a 00:08:24.811 [113/722] 
Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:08:24.811 [114/722] Linking static target lib/librte_net.a 00:08:24.811 [115/722] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:08:24.811 [116/722] Linking static target lib/librte_meter.a 00:08:25.068 [117/722] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.068 [118/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:08:25.326 [119/722] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.326 [120/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:08:25.326 [121/722] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:25.326 [122/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:08:25.326 [123/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:08:25.890 [124/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:08:25.890 [125/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:08:26.147 [126/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:08:26.405 [127/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:08:26.405 [128/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:08:26.405 [129/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:08:26.405 [130/722] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:08:26.405 [131/722] Linking static target lib/librte_pci.a 00:08:26.405 [132/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:08:26.663 [133/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:08:26.663 [134/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:08:26.663 [135/722] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:26.663 [136/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:08:26.663 [137/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:08:26.663 [138/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:08:26.663 [139/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:08:26.921 [140/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:08:26.921 [141/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:08:26.921 [142/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:08:26.921 [143/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:08:26.921 [144/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:08:26.921 [145/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:08:26.921 [146/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:08:26.921 [147/722] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:08:27.179 [148/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:08:27.179 [149/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:08:27.179 [150/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:08:27.179 [151/722] Linking static target lib/librte_cmdline.a 00:08:27.437 [152/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:08:27.694 
[153/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:08:27.694 [154/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:08:27.695 [155/722] Linking static target lib/librte_metrics.a 00:08:27.695 [156/722] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:08:27.695 [157/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:08:27.952 [158/722] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:08:28.210 [159/722] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:08:28.210 [160/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:08:28.494 [161/722] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:08:28.494 [162/722] Linking static target lib/librte_timer.a 00:08:28.781 [163/722] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:08:28.781 [164/722] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:08:29.039 [165/722] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:08:29.039 [166/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:08:29.297 [167/722] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:08:29.556 [168/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:08:29.556 [169/722] Linking static target lib/librte_ethdev.a 00:08:29.813 [170/722] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:08:29.813 [171/722] Linking static target lib/librte_bitratestats.a 00:08:29.813 [172/722] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:08:29.813 [173/722] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:08:29.813 [174/722] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:08:29.813 [175/722] Linking static target lib/librte_hash.a 00:08:29.813 [176/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:08:30.070 [177/722] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:08:30.070 [178/722] Linking target lib/librte_eal.so.24.2 00:08:30.071 [179/722] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:08:30.071 [180/722] Linking static target lib/librte_bbdev.a 00:08:30.071 [181/722] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:08:30.071 [182/722] Linking target lib/librte_ring.so.24.2 00:08:30.328 [183/722] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:08:30.328 [184/722] Linking target lib/librte_rcu.so.24.2 00:08:30.328 [185/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:08:30.328 [186/722] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:08:30.328 [187/722] Linking target lib/librte_mempool.so.24.2 00:08:30.328 [188/722] Linking target lib/librte_meter.so.24.2 00:08:30.585 [189/722] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:08:30.585 [190/722] Linking target lib/librte_pci.so.24.2 00:08:30.585 [191/722] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:08:30.585 [192/722] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:08:30.585 [193/722] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:08:30.585 [194/722] Linking static target lib/acl/libavx2_tmp.a 00:08:30.585 [195/722] Linking target 
lib/librte_timer.so.24.2 00:08:30.585 [196/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:08:30.585 [197/722] Linking target lib/librte_mbuf.so.24.2 00:08:30.585 [198/722] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:08:30.585 [199/722] Linking static target lib/acl/libavx512_tmp.a 00:08:30.585 [200/722] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:08:30.585 [201/722] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:30.843 [202/722] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:08:30.843 [203/722] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:08:30.843 [204/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:08:30.843 [205/722] Linking target lib/librte_bbdev.so.24.2 00:08:30.843 [206/722] Linking target lib/librte_net.so.24.2 00:08:30.843 [207/722] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:08:30.843 [208/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:08:30.843 [209/722] Linking target lib/librte_cmdline.so.24.2 00:08:30.843 [210/722] Linking target lib/librte_hash.so.24.2 00:08:31.102 [211/722] Linking static target lib/librte_acl.a 00:08:31.102 [212/722] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:08:31.102 [213/722] Linking static target lib/librte_cfgfile.a 00:08:31.102 [214/722] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:08:31.102 [215/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:08:31.361 [216/722] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:08:31.361 [217/722] Linking target lib/librte_acl.so.24.2 00:08:31.361 [218/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:08:31.361 [219/722] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:08:31.361 [220/722] Linking target lib/librte_cfgfile.so.24.2 00:08:31.619 [221/722] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:08:31.619 [222/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:08:31.619 [223/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:08:31.877 [224/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:08:31.877 [225/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:08:31.878 [226/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:08:32.136 [227/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:08:32.136 [228/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:08:32.136 [229/722] Linking static target lib/librte_bpf.a 00:08:32.136 [230/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:08:32.136 [231/722] Linking static target lib/librte_compressdev.a 00:08:32.447 [232/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:08:32.447 [233/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:08:32.447 [234/722] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:08:32.719 [235/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:08:32.719 [236/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 
00:08:32.719 [237/722] Linking static target lib/librte_distributor.a 00:08:32.719 [238/722] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:32.719 [239/722] Linking target lib/librte_compressdev.so.24.2 00:08:32.719 [240/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:08:32.977 [241/722] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:08:32.977 [242/722] Linking target lib/librte_distributor.so.24.2 00:08:32.977 [243/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:08:32.977 [244/722] Linking static target lib/librte_dmadev.a 00:08:32.977 [245/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:08:33.542 [246/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:08:33.542 [247/722] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:33.542 [248/722] Linking target lib/librte_dmadev.so.24.2 00:08:33.543 [249/722] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:08:33.801 [250/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:08:33.801 [251/722] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:08:33.801 [252/722] Linking static target lib/librte_efd.a 00:08:33.801 [253/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:08:34.058 [254/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:08:34.059 [255/722] Linking static target lib/librte_cryptodev.a 00:08:34.059 [256/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:08:34.317 [257/722] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.317 [258/722] Linking target lib/librte_efd.so.24.2 00:08:34.575 [259/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:08:34.575 [260/722] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:08:34.575 [261/722] Linking static target lib/librte_dispatcher.a 00:08:34.575 [262/722] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:34.833 [263/722] Linking target lib/librte_ethdev.so.24.2 00:08:34.833 [264/722] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:08:34.833 [265/722] Linking static target lib/librte_gpudev.a 00:08:34.833 [266/722] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:08:34.833 [267/722] Linking target lib/librte_metrics.so.24.2 00:08:34.833 [268/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:08:35.090 [269/722] Linking target lib/librte_bpf.so.24.2 00:08:35.090 [270/722] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.090 [271/722] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:08:35.090 [272/722] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:08:35.090 [273/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:08:35.090 [274/722] Linking target lib/librte_bitratestats.so.24.2 00:08:35.090 [275/722] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:08:35.348 [276/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:08:35.348 [277/722] Generating lib/cryptodev.sym_chk with a 
custom command (wrapped by meson to capture output) 00:08:35.348 [278/722] Linking target lib/librte_cryptodev.so.24.2 00:08:35.605 [279/722] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:08:35.605 [280/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:08:35.605 [281/722] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:35.605 [282/722] Linking target lib/librte_gpudev.so.24.2 00:08:35.863 [283/722] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:08:35.863 [284/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:08:35.863 [285/722] Linking static target lib/librte_eventdev.a 00:08:35.863 [286/722] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:08:35.863 [287/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:08:35.863 [288/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:08:35.863 [289/722] Linking static target lib/librte_gro.a 00:08:35.863 [290/722] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:08:35.863 [291/722] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:08:36.122 [292/722] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:08:36.122 [293/722] Linking target lib/librte_gro.so.24.2 00:08:36.122 [294/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:08:36.381 [295/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:08:36.381 [296/722] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:08:36.381 [297/722] Linking static target lib/librte_gso.a 00:08:36.640 [298/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:08:36.641 [299/722] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:08:36.641 [300/722] Linking target lib/librte_gso.so.24.2 00:08:36.641 [301/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:08:36.641 [302/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:08:36.641 [303/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:08:36.901 [304/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:08:36.901 [305/722] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:08:36.901 [306/722] Linking static target lib/librte_jobstats.a 00:08:37.159 [307/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:08:37.159 [308/722] Linking static target lib/librte_ip_frag.a 00:08:37.159 [309/722] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:08:37.159 [310/722] Linking static target lib/librte_latencystats.a 00:08:37.159 [311/722] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:08:37.159 [312/722] Linking target lib/librte_jobstats.so.24.2 00:08:37.416 [313/722] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:08:37.417 [314/722] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:08:37.417 [315/722] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:08:37.417 [316/722] Linking static target lib/member/libsketch_avx512_tmp.a 00:08:37.417 [317/722] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:08:37.417 [318/722] 
Linking target lib/librte_latencystats.so.24.2 00:08:37.417 [319/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:08:37.417 [320/722] Linking target lib/librte_ip_frag.so.24.2 00:08:37.417 [321/722] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:08:37.674 [322/722] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:08:37.674 [323/722] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:08:37.674 [324/722] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:08:37.933 [325/722] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:37.933 [326/722] Linking target lib/librte_eventdev.so.24.2 00:08:37.933 [327/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:08:37.933 [328/722] Linking static target lib/librte_lpm.a 00:08:37.933 [329/722] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:08:38.190 [330/722] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:08:38.190 [331/722] Linking target lib/librte_dispatcher.so.24.2 00:08:38.190 [332/722] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:08:38.190 [333/722] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:08:38.449 [334/722] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:08:38.449 [335/722] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:08:38.449 [336/722] Linking static target lib/librte_pcapng.a 00:08:38.449 [337/722] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:08:38.449 [338/722] Linking target lib/librte_lpm.so.24.2 00:08:38.449 [339/722] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:08:38.449 [340/722] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:08:38.449 [341/722] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:08:38.707 [342/722] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:08:38.707 [343/722] Linking target lib/librte_pcapng.so.24.2 00:08:38.707 [344/722] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:08:38.707 [345/722] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:08:38.707 [346/722] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:08:38.966 [347/722] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:08:38.966 [348/722] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:08:38.966 [349/722] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:08:38.966 [350/722] Linking static target lib/librte_member.a 00:08:38.966 [351/722] Linking static target lib/librte_power.a 00:08:38.966 [352/722] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:08:39.224 [353/722] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:08:39.224 [354/722] Linking static target lib/librte_rawdev.a 00:08:39.224 [355/722] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:08:39.224 [356/722] Linking static target lib/librte_regexdev.a 00:08:39.224 [357/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:08:39.483 [358/722] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:08:39.483 [359/722] Compiling C object 
lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:08:39.483 [360/722] Linking target lib/librte_member.so.24.2 00:08:39.483 [361/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:08:39.483 [362/722] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:08:39.742 [363/722] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:08:39.742 [364/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:08:39.742 [365/722] Linking static target lib/librte_mldev.a 00:08:39.742 [366/722] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:39.742 [367/722] Linking target lib/librte_power.so.24.2 00:08:39.742 [368/722] Linking target lib/librte_rawdev.so.24.2 00:08:39.742 [369/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:08:40.000 [370/722] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:08:40.000 [371/722] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:40.000 [372/722] Linking target lib/librte_regexdev.so.24.2 00:08:40.258 [373/722] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:08:40.258 [374/722] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:08:40.258 [375/722] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:08:40.258 [376/722] Linking static target lib/librte_reorder.a 00:08:40.258 [377/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:08:40.258 [378/722] Linking static target lib/librte_rib.a 00:08:40.258 [379/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:08:40.258 [380/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:08:40.524 [381/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:08:40.524 [382/722] Linking static target lib/librte_stack.a 00:08:40.525 [383/722] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:08:40.525 [384/722] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:08:40.525 [385/722] Linking static target lib/librte_security.a 00:08:40.525 [386/722] Linking target lib/librte_reorder.so.24.2 00:08:40.785 [387/722] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:08:40.785 [388/722] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:08:40.785 [389/722] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:08:40.785 [390/722] Linking target lib/librte_rib.so.24.2 00:08:40.785 [391/722] Linking target lib/librte_stack.so.24.2 00:08:41.043 [392/722] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:08:41.043 [393/722] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:08:41.043 [394/722] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:08:41.043 [395/722] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:08:41.043 [396/722] Linking target lib/librte_security.so.24.2 00:08:41.302 [397/722] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:41.302 [398/722] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:08:41.302 [399/722] Linking target lib/librte_mldev.so.24.2 00:08:41.302 [400/722] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:08:41.302 [401/722] Compiling C object 
lib/librte_sched.a.p/sched_rte_sched.c.o 00:08:41.302 [402/722] Linking static target lib/librte_sched.a 00:08:41.868 [403/722] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:08:41.868 [404/722] Linking target lib/librte_sched.so.24.2 00:08:41.868 [405/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:08:41.868 [406/722] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:08:41.868 [407/722] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:08:42.126 [408/722] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:08:42.383 [409/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:08:42.383 [410/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:08:42.641 [411/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:08:42.641 [412/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:08:42.898 [413/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:08:42.899 [414/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:08:43.156 [415/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:08:43.156 [416/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:08:43.156 [417/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:08:43.414 [418/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:08:43.414 [419/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:08:43.414 [420/722] Linking static target lib/librte_ipsec.a 00:08:43.672 [421/722] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:08:43.672 [422/722] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:08:43.672 [423/722] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:08:43.672 [424/722] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:08:43.672 [425/722] Linking static target lib/fib/libtrie_avx512_tmp.a 00:08:43.672 [426/722] Linking target lib/librte_ipsec.so.24.2 00:08:43.672 [427/722] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:08:43.930 [428/722] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:08:43.930 [429/722] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:08:43.930 [430/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:08:43.930 [431/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:08:44.871 [432/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:08:44.871 [433/722] Linking static target lib/librte_pdcp.a 00:08:44.871 [434/722] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:08:44.871 [435/722] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:08:44.871 [436/722] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:08:44.871 [437/722] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:08:44.871 [438/722] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:08:44.871 [439/722] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:08:44.871 [440/722] Linking static target lib/librte_fib.a 00:08:45.129 [441/722] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:08:45.129 [442/722] Linking target lib/librte_pdcp.so.24.2 00:08:45.394 [443/722] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:08:45.394 [444/722] Linking target 
lib/librte_fib.so.24.2 00:08:45.394 [445/722] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:08:45.959 [446/722] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:08:45.959 [447/722] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:08:46.217 [448/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:08:46.217 [449/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:08:46.217 [450/722] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:08:46.217 [451/722] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:08:46.475 [452/722] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:08:46.475 [453/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:08:46.475 [454/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:08:46.732 [455/722] Linking static target lib/librte_port.a 00:08:46.989 [456/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:08:46.989 [457/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:08:46.989 [458/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:08:46.989 [459/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:08:47.246 [460/722] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:08:47.246 [461/722] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:08:47.246 [462/722] Linking target lib/librte_port.so.24.2 00:08:47.246 [463/722] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:08:47.246 [464/722] Linking static target lib/librte_pdump.a 00:08:47.246 [465/722] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:08:47.503 [466/722] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:08:47.503 [467/722] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:08:47.503 [468/722] Linking target lib/librte_pdump.so.24.2 00:08:47.503 [469/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:08:47.762 [470/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:08:47.762 [471/722] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:08:48.036 [472/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:08:48.036 [473/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:08:48.036 [474/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:08:48.305 [475/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:08:48.305 [476/722] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:08:48.305 [477/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:08:48.563 [478/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:08:48.563 [479/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:08:48.563 [480/722] Linking static target lib/librte_table.a 00:08:48.820 [481/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:08:49.082 [482/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:08:49.340 [483/722] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:08:49.340 [484/722] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture 
output) 00:08:49.598 [485/722] Linking target lib/librte_table.so.24.2 00:08:49.598 [486/722] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:08:49.598 [487/722] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:08:49.855 [488/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:08:49.855 [489/722] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:08:49.855 [490/722] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:08:50.112 [491/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:08:50.370 [492/722] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:08:50.370 [493/722] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:08:50.370 [494/722] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:08:50.627 [495/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:08:50.627 [496/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:08:50.885 [497/722] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:08:50.885 [498/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:08:50.885 [499/722] Linking static target lib/librte_graph.a 00:08:51.143 [500/722] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:08:51.143 [501/722] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:08:51.464 [502/722] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:08:51.465 [503/722] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:08:51.465 [504/722] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:08:51.722 [505/722] Linking target lib/librte_graph.so.24.2 00:08:51.722 [506/722] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:08:51.981 [507/722] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:08:51.981 [508/722] Compiling C object lib/librte_node.a.p/node_null.c.o 00:08:52.239 [509/722] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:08:52.239 [510/722] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:08:52.239 [511/722] Compiling C object lib/librte_node.a.p/node_log.c.o 00:08:52.497 [512/722] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:08:52.497 [513/722] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:08:52.497 [514/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:08:52.755 [515/722] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:08:52.755 [516/722] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:08:53.012 [517/722] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:08:53.012 [518/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:08:53.271 [519/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:08:53.271 [520/722] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:08:53.271 [521/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:08:53.271 [522/722] Linking static target lib/librte_node.a 00:08:53.271 [523/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:08:53.271 [524/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:08:53.528 [525/722] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 
00:08:53.528 [526/722] Linking target lib/librte_node.so.24.2 00:08:53.785 [527/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:08:53.785 [528/722] Linking static target drivers/libtmp_rte_bus_pci.a 00:08:54.042 [529/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:08:54.042 [530/722] Linking static target drivers/libtmp_rte_bus_vdev.a 00:08:54.042 [531/722] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:08:54.042 [532/722] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:54.042 [533/722] Linking static target drivers/librte_bus_pci.a 00:08:54.300 [534/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:08:54.300 [535/722] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:08:54.300 [536/722] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:08:54.300 [537/722] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:54.300 [538/722] Linking static target drivers/librte_bus_vdev.a 00:08:54.300 [539/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:08:54.300 [540/722] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:08:54.300 [541/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:08:54.558 [542/722] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:08:54.558 [543/722] Linking static target drivers/libtmp_rte_mempool_ring.a 00:08:54.558 [544/722] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:08:54.558 [545/722] Linking target drivers/librte_bus_pci.so.24.2 00:08:54.558 [546/722] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:08:54.558 [547/722] Linking target drivers/librte_bus_vdev.so.24.2 00:08:54.815 [548/722] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:08:54.815 [549/722] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:08:54.815 [550/722] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:54.815 [551/722] Linking static target drivers/librte_mempool_ring.a 00:08:54.815 [552/722] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:08:54.815 [553/722] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:08:54.815 [554/722] Linking target drivers/librte_mempool_ring.so.24.2 00:08:55.097 [555/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:08:55.355 [556/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:08:55.614 [557/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:08:55.614 [558/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:08:55.614 [559/722] Linking static target drivers/net/i40e/base/libi40e_base.a 00:08:56.178 [560/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:08:56.743 [561/722] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:08:56.743 [562/722] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:08:56.743 [563/722] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:08:57.005 [564/722] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:08:57.005 [565/722] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:08:57.264 [566/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:08:57.264 [567/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:08:57.522 [568/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:08:57.522 [569/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:08:57.779 [570/722] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:08:57.779 [571/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:08:58.037 [572/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:08:58.294 [573/722] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:08:58.552 [574/722] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:08:58.552 [575/722] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:08:58.552 [576/722] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:08:59.114 [577/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:08:59.114 [578/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:08:59.114 [579/722] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:08:59.370 [580/722] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:08:59.370 [581/722] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:08:59.370 [582/722] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:08:59.370 [583/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:08:59.951 [584/722] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:08:59.951 [585/722] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:08:59.951 [586/722] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:08:59.951 [587/722] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:08:59.951 [588/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:08:59.951 [589/722] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:08:59.951 [590/722] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:08:59.951 [591/722] Linking static target lib/librte_vhost.a 00:09:00.528 [592/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:09:00.785 [593/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:09:00.785 [594/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:09:00.785 [595/722] Linking static target drivers/libtmp_rte_net_i40e.a 00:09:00.785 [596/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:09:00.785 [597/722] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:09:00.785 [598/722] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:09:01.043 [599/722] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:09:01.043 [600/722] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:09:01.043 [601/722] Linking static target drivers/librte_net_i40e.a 00:09:01.302 [602/722] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:09:01.302 [603/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:09:01.302 [604/722] Compiling C object 
drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:09:01.302 [605/722] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:09:01.560 [606/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:09:01.560 [607/722] Linking target lib/librte_vhost.so.24.2 00:09:01.560 [608/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:09:01.818 [609/722] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:09:01.818 [610/722] Linking target drivers/librte_net_i40e.so.24.2 00:09:02.075 [611/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:09:02.075 [612/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:09:02.332 [613/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:09:02.590 [614/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:09:02.590 [615/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:09:02.590 [616/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:09:02.590 [617/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:09:02.590 [618/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:09:03.155 [619/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:09:03.155 [620/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:09:03.412 [621/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:09:03.412 [622/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:09:03.412 [623/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:09:03.412 [624/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:09:03.412 [625/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:09:03.412 [626/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:09:03.670 [627/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:09:03.670 [628/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:09:03.928 [629/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:09:04.186 [630/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:09:04.443 [631/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:09:04.443 [632/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:09:04.443 [633/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:09:04.443 [634/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:09:05.431 [635/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:09:05.431 [636/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:09:05.709 [637/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:09:05.709 [638/722] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:09:05.709 [639/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:09:05.709 [640/722] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:09:06.038 [641/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:09:06.038 [642/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:09:06.038 [643/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:09:06.374 [644/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:09:06.374 [645/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:09:06.374 [646/722] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:09:06.653 [647/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:09:06.653 [648/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:09:06.653 [649/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:09:06.653 [650/722] Linking static target lib/librte_pipeline.a 00:09:06.653 [651/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:09:06.913 [652/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:09:06.913 [653/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:09:07.222 [654/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:09:07.222 [655/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:09:07.222 [656/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:09:07.222 [657/722] Linking target app/dpdk-dumpcap 00:09:07.507 [658/722] Linking target app/dpdk-graph 00:09:07.507 [659/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:09:07.507 [660/722] Linking target app/dpdk-pdump 00:09:07.507 [661/722] Linking target app/dpdk-proc-info 00:09:07.833 [662/722] Linking target app/dpdk-test-acl 00:09:07.833 [663/722] Linking target app/dpdk-test-cmdline 00:09:07.833 [664/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:09:07.833 [665/722] Linking target app/dpdk-test-compress-perf 00:09:07.833 [666/722] Linking target app/dpdk-test-crypto-perf 00:09:08.104 [667/722] Linking target app/dpdk-test-dma-perf 00:09:08.104 [668/722] Linking target app/dpdk-test-eventdev 00:09:08.104 [669/722] Linking target app/dpdk-test-fib 00:09:08.104 [670/722] Linking target app/dpdk-test-gpudev 00:09:08.361 [671/722] Linking target app/dpdk-test-flow-perf 00:09:08.361 [672/722] Linking target app/dpdk-test-bbdev 00:09:08.361 [673/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:09:08.618 [674/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:09:08.618 [675/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:09:08.876 [676/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:09:08.876 [677/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:09:08.876 [678/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:09:08.876 [679/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:09:08.876 [680/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:09:09.132 [681/722] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:09:09.132 [682/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:09:09.132 [683/722] Linking target app/dpdk-test-mldev 00:09:09.132 [684/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:09:09.697 [685/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:09:09.697 [686/722] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:09:09.697 [687/722] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:09:09.955 [688/722] Linking target lib/librte_pipeline.so.24.2 00:09:09.955 [689/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:09:09.955 [690/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:09:10.211 [691/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:09:10.211 [692/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:09:10.468 [693/722] Linking target app/dpdk-test-pipeline 00:09:10.468 [694/722] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:09:10.725 [695/722] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:09:10.725 [696/722] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:09:10.725 [697/722] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:09:10.983 [698/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:09:11.239 [699/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:09:11.495 [700/722] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:09:11.495 [701/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:09:11.495 [702/722] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:09:11.495 [703/722] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:09:11.753 [704/722] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:09:11.753 [705/722] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:09:12.319 [706/722] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:09:12.576 [707/722] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:09:12.576 [708/722] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:09:12.576 [709/722] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:09:12.834 [710/722] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:09:13.093 [711/722] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:09:13.093 [712/722] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:09:13.093 [713/722] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:09:13.093 [714/722] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:09:13.093 [715/722] Linking target app/dpdk-test-sad 00:09:13.360 [716/722] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:09:13.360 [717/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:09:13.360 [718/722] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:09:13.635 [719/722] Linking target app/dpdk-test-regex 00:09:13.893 [720/722] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:09:13.893 [721/722] Linking target app/dpdk-testpmd 00:09:14.460 [722/722] Linking target app/dpdk-test-security-perf 00:09:14.460 13:27:27 build_native_dpdk -- common/autobuild_common.sh@187 
-- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:09:14.460 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:09:14.460 [0/1] Installing files. 00:09:14.722 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:09:14.722 Installing 
/home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:14.722 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.722 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 
00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.723 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 
00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:09:14.724 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:09:14.724 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:09:14.724 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_mempool.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing 
lib/librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.724 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 
Installing lib/librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:14.725 Installing lib/librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:15.292 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:15.292 Installing lib/librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:15.292 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:15.292 Installing lib/librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:15.292 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:15.292 Installing lib/librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:15.292 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:15.292 Installing drivers/librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:09:15.292 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:15.292 Installing drivers/librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:09:15.292 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:15.292 Installing drivers/librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:09:15.292 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:09:15.292 Installing drivers/librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2 00:09:15.292 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing 
app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.292 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing 
/home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing 
/home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing 
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.293 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.294 Installing 
/home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:09:15.294 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:09:15.294 Installing symlink pointing to librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:09:15.294 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:09:15.294 Installing symlink pointing to librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:09:15.294 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:09:15.294 Installing symlink pointing to librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.24 00:09:15.294 Installing symlink pointing to librte_argparse.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:09:15.294 Installing symlink pointing to librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:09:15.294 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:09:15.294 Installing symlink pointing to librte_eal.so.24.2 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:09:15.294 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:09:15.294 Installing symlink pointing to librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:09:15.294 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:09:15.294 Installing symlink pointing to librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:09:15.294 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:09:15.294 Installing symlink pointing to librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:09:15.294 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:09:15.294 Installing symlink pointing to librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:09:15.294 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:09:15.294 Installing symlink pointing to librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:09:15.294 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:09:15.294 Installing symlink pointing to librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:09:15.294 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:09:15.294 Installing symlink pointing to librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:09:15.294 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:09:15.294 Installing symlink pointing to librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:09:15.294 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:09:15.294 Installing symlink pointing to librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:09:15.294 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:09:15.294 Installing symlink pointing to librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:09:15.294 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:09:15.294 Installing symlink pointing to librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:09:15.294 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:09:15.294 Installing symlink pointing to librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:09:15.294 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:09:15.294 Installing symlink pointing to librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:09:15.294 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:09:15.294 Installing symlink pointing to librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:09:15.294 Installing symlink pointing to 
librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:09:15.294 Installing symlink pointing to librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:09:15.294 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:09:15.294 Installing symlink pointing to librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:09:15.294 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:09:15.294 Installing symlink pointing to librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:09:15.294 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:09:15.294 Installing symlink pointing to librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:09:15.294 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:09:15.294 Installing symlink pointing to librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:09:15.294 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:09:15.294 Installing symlink pointing to librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:09:15.294 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:09:15.294 Installing symlink pointing to librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:09:15.294 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:09:15.294 Installing symlink pointing to librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:09:15.294 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:09:15.294 Installing symlink pointing to librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:09:15.294 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:09:15.294 Installing symlink pointing to librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:09:15.294 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:09:15.294 Installing symlink pointing to librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:09:15.294 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:09:15.294 Installing symlink pointing to librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:09:15.294 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:09:15.294 Installing symlink pointing to librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:09:15.294 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:09:15.294 Installing symlink pointing to librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:09:15.294 Installing 
symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:09:15.294 Installing symlink pointing to librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:09:15.294 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:09:15.294 Installing symlink pointing to librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:09:15.294 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:09:15.294 Installing symlink pointing to librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:09:15.294 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:09:15.294 Installing symlink pointing to librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:09:15.294 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:09:15.294 Installing symlink pointing to librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:09:15.294 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:09:15.294 Installing symlink pointing to librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:09:15.294 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:09:15.294 Installing symlink pointing to librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:09:15.294 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:09:15.294 Installing symlink pointing to librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:09:15.294 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:09:15.294 Installing symlink pointing to librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:09:15.294 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:09:15.294 Installing symlink pointing to librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:09:15.294 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:09:15.294 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:09:15.294 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:09:15.294 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:09:15.294 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:09:15.294 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:09:15.294 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:09:15.294 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:09:15.294 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:09:15.294 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:09:15.294 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:09:15.294 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:09:15.294 Installing symlink pointing to librte_rib.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:09:15.294 Installing symlink pointing to librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:09:15.294 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:09:15.294 Installing symlink pointing to librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:09:15.294 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:09:15.294 Installing symlink pointing to librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:09:15.294 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:09:15.294 Installing symlink pointing to librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:09:15.294 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:09:15.294 Installing symlink pointing to librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:09:15.294 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:09:15.294 Installing symlink pointing to librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:09:15.294 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:09:15.294 Installing symlink pointing to librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:09:15.294 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:09:15.294 Installing symlink pointing to librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:09:15.294 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:09:15.294 Installing symlink pointing to librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:09:15.294 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:09:15.294 Installing symlink pointing to librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:09:15.294 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:09:15.294 Installing symlink pointing to librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:09:15.294 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:09:15.294 Installing symlink pointing to librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:09:15.294 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:09:15.294 Installing symlink pointing to librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:09:15.294 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:09:15.294 Installing symlink pointing to librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:09:15.294 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:09:15.294 Installing symlink 
pointing to librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:09:15.294 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:09:15.294 Installing symlink pointing to librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:09:15.294 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:09:15.294 Installing symlink pointing to librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:09:15.294 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:09:15.294 Installing symlink pointing to librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:09:15.294 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:09:15.294 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:09:15.294 13:27:28 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:09:15.294 13:27:28 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:09:15.294 13:27:28 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:09:15.294 13:27:28 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:09:15.294 00:09:15.295 real 1m3.801s 00:09:15.295 user 7m49.675s 00:09:15.295 sys 1m12.881s 00:09:15.295 13:27:28 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:09:15.295 13:27:28 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:09:15.295 ************************************ 00:09:15.295 END TEST build_native_dpdk 00:09:15.295 ************************************ 00:09:15.295 13:27:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:09:15.295 13:27:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:09:15.295 13:27:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:09:15.295 13:27:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:09:15.295 13:27:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:09:15.295 13:27:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:09:15.295 13:27:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:09:15.295 13:27:28 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:09:15.552 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:09:15.552 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:09:15.552 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:09:15.552 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:09:16.118 Using 'verbs' RDMA provider 00:09:29.277 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:09:44.147 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 
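The configure invocation above points SPDK at the DPDK tree that was just staged under /home/vagrant/spdk_repo/dpdk/build, and the pkg-config data installed into build/lib/pkgconfig is what supplies the DPDK libraries and include paths. As a minimal sketch of the same flow, assuming a side-by-side dpdk/ and spdk/ checkout and using only the subset of flags visible in this log (the exact meson options used by the job are not shown, so the setup line is illustrative only):

    cd dpdk
    meson setup build-tmp --prefix="$PWD/build"    # stage the install into dpdk/build, matching the paths above
    ninja -C build-tmp install
    cd ../spdk
    ./configure --with-dpdk=../dpdk/build --with-shared --enable-debug --enable-werror
    make -j"$(nproc)"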
00:09:44.147 go version go1.21.1 linux/amd64 00:09:44.147 Creating mk/config.mk...done. 00:09:44.147 Creating mk/cc.flags.mk...done. 00:09:44.147 Type 'make' to build. 00:09:44.147 13:27:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:09:44.147 13:27:55 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:09:44.147 13:27:55 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:09:44.147 13:27:55 -- common/autotest_common.sh@10 -- $ set +x 00:09:44.147 ************************************ 00:09:44.147 START TEST make 00:09:44.147 ************************************ 00:09:44.147 13:27:55 make -- common/autotest_common.sh@1121 -- $ make -j10 00:09:44.147 make[1]: Nothing to be done for 'all'. 00:10:10.669 CC lib/ut_mock/mock.o 00:10:10.669 CC lib/log/log.o 00:10:10.669 CC lib/log/log_deprecated.o 00:10:10.669 CC lib/log/log_flags.o 00:10:10.669 CC lib/ut/ut.o 00:10:10.669 LIB libspdk_ut_mock.a 00:10:10.669 LIB libspdk_log.a 00:10:10.669 SO libspdk_ut_mock.so.6.0 00:10:10.669 SO libspdk_log.so.7.0 00:10:10.669 LIB libspdk_ut.a 00:10:10.669 SYMLINK libspdk_ut_mock.so 00:10:10.669 SO libspdk_ut.so.2.0 00:10:10.669 SYMLINK libspdk_log.so 00:10:10.669 SYMLINK libspdk_ut.so 00:10:10.669 CC lib/dma/dma.o 00:10:10.669 CC lib/util/base64.o 00:10:10.669 CXX lib/trace_parser/trace.o 00:10:10.669 CC lib/ioat/ioat.o 00:10:10.669 CC lib/util/cpuset.o 00:10:10.669 CC lib/util/bit_array.o 00:10:10.669 CC lib/util/crc32.o 00:10:10.669 CC lib/util/crc16.o 00:10:10.669 CC lib/util/crc32c.o 00:10:10.669 CC lib/util/crc32_ieee.o 00:10:10.669 CC lib/vfio_user/host/vfio_user_pci.o 00:10:10.669 CC lib/util/crc64.o 00:10:10.669 CC lib/util/dif.o 00:10:10.669 LIB libspdk_dma.a 00:10:10.669 CC lib/util/fd.o 00:10:10.669 SO libspdk_dma.so.4.0 00:10:10.669 CC lib/util/file.o 00:10:10.669 CC lib/vfio_user/host/vfio_user.o 00:10:10.669 SYMLINK libspdk_dma.so 00:10:10.669 CC lib/util/hexlify.o 00:10:10.669 CC lib/util/iov.o 00:10:10.669 CC lib/util/math.o 00:10:10.669 CC lib/util/pipe.o 00:10:10.669 CC lib/util/strerror_tls.o 00:10:10.669 LIB libspdk_ioat.a 00:10:10.669 CC lib/util/string.o 00:10:10.669 SO libspdk_ioat.so.7.0 00:10:10.669 LIB libspdk_vfio_user.a 00:10:10.669 CC lib/util/uuid.o 00:10:10.669 CC lib/util/fd_group.o 00:10:10.669 SO libspdk_vfio_user.so.5.0 00:10:10.669 CC lib/util/xor.o 00:10:10.669 SYMLINK libspdk_ioat.so 00:10:10.669 CC lib/util/zipf.o 00:10:10.669 SYMLINK libspdk_vfio_user.so 00:10:10.927 LIB libspdk_util.a 00:10:10.927 SO libspdk_util.so.9.0 00:10:11.184 SYMLINK libspdk_util.so 00:10:11.184 LIB libspdk_trace_parser.a 00:10:11.184 SO libspdk_trace_parser.so.5.0 00:10:11.441 CC lib/conf/conf.o 00:10:11.441 CC lib/vmd/vmd.o 00:10:11.441 CC lib/vmd/led.o 00:10:11.441 CC lib/idxd/idxd.o 00:10:11.441 CC lib/idxd/idxd_user.o 00:10:11.441 CC lib/env_dpdk/env.o 00:10:11.441 CC lib/env_dpdk/memory.o 00:10:11.441 CC lib/json/json_parse.o 00:10:11.441 CC lib/rdma/common.o 00:10:11.441 SYMLINK libspdk_trace_parser.so 00:10:11.441 CC lib/rdma/rdma_verbs.o 00:10:11.441 CC lib/env_dpdk/pci.o 00:10:11.698 CC lib/json/json_util.o 00:10:11.698 CC lib/json/json_write.o 00:10:11.698 LIB libspdk_conf.a 00:10:11.698 CC lib/env_dpdk/init.o 00:10:11.698 LIB libspdk_rdma.a 00:10:11.698 SO libspdk_conf.so.6.0 00:10:11.698 SO libspdk_rdma.so.6.0 00:10:11.698 SYMLINK libspdk_conf.so 00:10:11.698 CC lib/env_dpdk/threads.o 00:10:11.698 SYMLINK libspdk_rdma.so 00:10:11.698 CC lib/env_dpdk/pci_ioat.o 00:10:11.955 CC lib/env_dpdk/pci_virtio.o 00:10:11.955 CC lib/env_dpdk/pci_vmd.o 00:10:11.955 LIB 
libspdk_idxd.a 00:10:11.955 CC lib/env_dpdk/pci_idxd.o 00:10:11.955 LIB libspdk_json.a 00:10:11.955 CC lib/env_dpdk/pci_event.o 00:10:11.955 SO libspdk_idxd.so.12.0 00:10:11.955 SO libspdk_json.so.6.0 00:10:11.955 SYMLINK libspdk_idxd.so 00:10:11.955 CC lib/env_dpdk/sigbus_handler.o 00:10:11.955 SYMLINK libspdk_json.so 00:10:11.955 CC lib/env_dpdk/pci_dpdk.o 00:10:11.955 CC lib/env_dpdk/pci_dpdk_2207.o 00:10:11.955 LIB libspdk_vmd.a 00:10:12.248 CC lib/env_dpdk/pci_dpdk_2211.o 00:10:12.248 SO libspdk_vmd.so.6.0 00:10:12.248 SYMLINK libspdk_vmd.so 00:10:12.248 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:10:12.248 CC lib/jsonrpc/jsonrpc_server.o 00:10:12.248 CC lib/jsonrpc/jsonrpc_client.o 00:10:12.248 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:10:12.518 LIB libspdk_jsonrpc.a 00:10:12.518 SO libspdk_jsonrpc.so.6.0 00:10:12.776 SYMLINK libspdk_jsonrpc.so 00:10:12.776 LIB libspdk_env_dpdk.a 00:10:13.034 CC lib/rpc/rpc.o 00:10:13.034 SO libspdk_env_dpdk.so.14.0 00:10:13.034 SYMLINK libspdk_env_dpdk.so 00:10:13.034 LIB libspdk_rpc.a 00:10:13.292 SO libspdk_rpc.so.6.0 00:10:13.292 SYMLINK libspdk_rpc.so 00:10:13.550 CC lib/trace/trace.o 00:10:13.550 CC lib/trace/trace_flags.o 00:10:13.550 CC lib/trace/trace_rpc.o 00:10:13.550 CC lib/notify/notify.o 00:10:13.550 CC lib/keyring/keyring.o 00:10:13.550 CC lib/keyring/keyring_rpc.o 00:10:13.550 CC lib/notify/notify_rpc.o 00:10:13.807 LIB libspdk_notify.a 00:10:13.807 LIB libspdk_trace.a 00:10:13.807 SO libspdk_notify.so.6.0 00:10:13.807 LIB libspdk_keyring.a 00:10:13.807 SO libspdk_trace.so.10.0 00:10:13.807 SO libspdk_keyring.so.1.0 00:10:13.807 SYMLINK libspdk_notify.so 00:10:13.807 SYMLINK libspdk_trace.so 00:10:13.807 SYMLINK libspdk_keyring.so 00:10:14.064 CC lib/thread/thread.o 00:10:14.064 CC lib/thread/iobuf.o 00:10:14.064 CC lib/sock/sock.o 00:10:14.064 CC lib/sock/sock_rpc.o 00:10:14.628 LIB libspdk_sock.a 00:10:14.628 SO libspdk_sock.so.9.0 00:10:14.628 SYMLINK libspdk_sock.so 00:10:14.885 CC lib/nvme/nvme_ctrlr_cmd.o 00:10:15.142 CC lib/nvme/nvme_ctrlr.o 00:10:15.142 CC lib/nvme/nvme_fabric.o 00:10:15.142 CC lib/nvme/nvme_ns_cmd.o 00:10:15.142 CC lib/nvme/nvme_ns.o 00:10:15.142 CC lib/nvme/nvme_pcie_common.o 00:10:15.142 CC lib/nvme/nvme_pcie.o 00:10:15.142 CC lib/nvme/nvme_qpair.o 00:10:15.142 CC lib/nvme/nvme.o 00:10:15.707 CC lib/nvme/nvme_quirks.o 00:10:15.707 LIB libspdk_thread.a 00:10:15.964 SO libspdk_thread.so.10.0 00:10:15.965 CC lib/nvme/nvme_transport.o 00:10:15.965 SYMLINK libspdk_thread.so 00:10:15.965 CC lib/nvme/nvme_discovery.o 00:10:15.965 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:10:15.965 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:10:16.222 CC lib/accel/accel.o 00:10:16.222 CC lib/nvme/nvme_tcp.o 00:10:16.222 CC lib/blob/blobstore.o 00:10:16.222 CC lib/blob/request.o 00:10:16.479 CC lib/blob/zeroes.o 00:10:16.479 CC lib/blob/blob_bs_dev.o 00:10:16.736 CC lib/accel/accel_rpc.o 00:10:16.736 CC lib/accel/accel_sw.o 00:10:16.736 CC lib/nvme/nvme_opal.o 00:10:16.736 CC lib/nvme/nvme_io_msg.o 00:10:16.736 CC lib/init/json_config.o 00:10:16.736 CC lib/nvme/nvme_poll_group.o 00:10:16.736 CC lib/nvme/nvme_zns.o 00:10:16.994 CC lib/nvme/nvme_stubs.o 00:10:17.251 LIB libspdk_accel.a 00:10:17.251 CC lib/init/subsystem.o 00:10:17.251 SO libspdk_accel.so.15.0 00:10:17.251 SYMLINK libspdk_accel.so 00:10:17.508 CC lib/init/subsystem_rpc.o 00:10:17.508 CC lib/nvme/nvme_auth.o 00:10:17.508 CC lib/nvme/nvme_cuse.o 00:10:17.508 CC lib/nvme/nvme_rdma.o 00:10:17.508 CC lib/virtio/virtio.o 00:10:17.508 CC lib/virtio/virtio_vhost_user.o 00:10:17.508 CC 
lib/virtio/virtio_vfio_user.o 00:10:17.508 CC lib/bdev/bdev.o 00:10:17.765 CC lib/init/rpc.o 00:10:17.765 CC lib/virtio/virtio_pci.o 00:10:17.765 LIB libspdk_init.a 00:10:17.765 CC lib/bdev/bdev_rpc.o 00:10:17.765 SO libspdk_init.so.5.0 00:10:18.022 CC lib/bdev/bdev_zone.o 00:10:18.022 SYMLINK libspdk_init.so 00:10:18.022 CC lib/bdev/part.o 00:10:18.022 LIB libspdk_virtio.a 00:10:18.280 SO libspdk_virtio.so.7.0 00:10:18.280 CC lib/bdev/scsi_nvme.o 00:10:18.280 SYMLINK libspdk_virtio.so 00:10:18.537 CC lib/event/log_rpc.o 00:10:18.537 CC lib/event/app.o 00:10:18.537 CC lib/event/reactor.o 00:10:18.537 CC lib/event/app_rpc.o 00:10:18.537 CC lib/event/scheduler_static.o 00:10:19.102 LIB libspdk_event.a 00:10:19.102 LIB libspdk_nvme.a 00:10:19.102 SO libspdk_event.so.13.0 00:10:19.102 SYMLINK libspdk_event.so 00:10:19.102 SO libspdk_nvme.so.13.0 00:10:19.359 SYMLINK libspdk_nvme.so 00:10:19.617 LIB libspdk_blob.a 00:10:19.875 SO libspdk_blob.so.11.0 00:10:19.875 SYMLINK libspdk_blob.so 00:10:20.132 CC lib/lvol/lvol.o 00:10:20.132 CC lib/blobfs/blobfs.o 00:10:20.132 CC lib/blobfs/tree.o 00:10:20.389 LIB libspdk_bdev.a 00:10:20.389 SO libspdk_bdev.so.15.0 00:10:20.646 SYMLINK libspdk_bdev.so 00:10:20.904 CC lib/ublk/ublk.o 00:10:20.904 CC lib/ublk/ublk_rpc.o 00:10:20.904 CC lib/nvmf/ctrlr.o 00:10:20.904 CC lib/nvmf/ctrlr_discovery.o 00:10:20.904 CC lib/nvmf/ctrlr_bdev.o 00:10:20.904 CC lib/nbd/nbd.o 00:10:20.904 CC lib/ftl/ftl_core.o 00:10:20.904 CC lib/scsi/dev.o 00:10:21.161 LIB libspdk_lvol.a 00:10:21.161 CC lib/scsi/lun.o 00:10:21.161 SO libspdk_lvol.so.10.0 00:10:21.161 LIB libspdk_blobfs.a 00:10:21.161 SO libspdk_blobfs.so.10.0 00:10:21.161 SYMLINK libspdk_lvol.so 00:10:21.161 CC lib/nvmf/subsystem.o 00:10:21.161 CC lib/nvmf/nvmf.o 00:10:21.161 SYMLINK libspdk_blobfs.so 00:10:21.161 CC lib/nvmf/nvmf_rpc.o 00:10:21.161 CC lib/nbd/nbd_rpc.o 00:10:21.417 CC lib/ftl/ftl_init.o 00:10:21.417 CC lib/scsi/port.o 00:10:21.417 CC lib/nvmf/transport.o 00:10:21.417 LIB libspdk_nbd.a 00:10:21.417 LIB libspdk_ublk.a 00:10:21.417 SO libspdk_nbd.so.7.0 00:10:21.417 SO libspdk_ublk.so.3.0 00:10:21.674 CC lib/nvmf/tcp.o 00:10:21.674 CC lib/scsi/scsi.o 00:10:21.674 CC lib/ftl/ftl_layout.o 00:10:21.674 SYMLINK libspdk_nbd.so 00:10:21.674 CC lib/ftl/ftl_debug.o 00:10:21.674 SYMLINK libspdk_ublk.so 00:10:21.674 CC lib/ftl/ftl_io.o 00:10:21.674 CC lib/scsi/scsi_bdev.o 00:10:21.931 CC lib/ftl/ftl_sb.o 00:10:21.931 CC lib/ftl/ftl_l2p.o 00:10:21.931 CC lib/scsi/scsi_pr.o 00:10:21.931 CC lib/scsi/scsi_rpc.o 00:10:21.931 CC lib/nvmf/stubs.o 00:10:22.189 CC lib/ftl/ftl_l2p_flat.o 00:10:22.189 CC lib/ftl/ftl_nv_cache.o 00:10:22.189 CC lib/scsi/task.o 00:10:22.189 CC lib/nvmf/mdns_server.o 00:10:22.189 CC lib/ftl/ftl_band.o 00:10:22.189 CC lib/ftl/ftl_band_ops.o 00:10:22.189 CC lib/nvmf/rdma.o 00:10:22.446 CC lib/nvmf/auth.o 00:10:22.446 LIB libspdk_scsi.a 00:10:22.446 CC lib/ftl/ftl_writer.o 00:10:22.446 SO libspdk_scsi.so.9.0 00:10:22.446 CC lib/ftl/ftl_rq.o 00:10:22.703 CC lib/ftl/ftl_reloc.o 00:10:22.703 SYMLINK libspdk_scsi.so 00:10:22.703 CC lib/ftl/ftl_l2p_cache.o 00:10:22.703 CC lib/ftl/ftl_p2l.o 00:10:22.703 CC lib/iscsi/conn.o 00:10:22.961 CC lib/iscsi/init_grp.o 00:10:22.961 CC lib/vhost/vhost.o 00:10:22.961 CC lib/vhost/vhost_rpc.o 00:10:23.219 CC lib/vhost/vhost_scsi.o 00:10:23.219 CC lib/ftl/mngt/ftl_mngt.o 00:10:23.219 CC lib/iscsi/iscsi.o 00:10:23.219 CC lib/vhost/vhost_blk.o 00:10:23.219 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:10:23.475 CC lib/iscsi/md5.o 00:10:23.475 CC lib/vhost/rte_vhost_user.o 
00:10:23.475 CC lib/iscsi/param.o 00:10:23.475 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:10:23.475 CC lib/iscsi/portal_grp.o 00:10:23.733 CC lib/iscsi/tgt_node.o 00:10:23.733 CC lib/iscsi/iscsi_subsystem.o 00:10:23.733 CC lib/ftl/mngt/ftl_mngt_startup.o 00:10:23.733 CC lib/iscsi/iscsi_rpc.o 00:10:23.991 CC lib/ftl/mngt/ftl_mngt_md.o 00:10:23.991 CC lib/iscsi/task.o 00:10:23.991 CC lib/ftl/mngt/ftl_mngt_misc.o 00:10:23.991 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:10:23.991 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:10:24.249 CC lib/ftl/mngt/ftl_mngt_band.o 00:10:24.249 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:10:24.249 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:10:24.249 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:10:24.249 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:10:24.249 CC lib/ftl/utils/ftl_conf.o 00:10:24.249 CC lib/ftl/utils/ftl_md.o 00:10:24.506 LIB libspdk_nvmf.a 00:10:24.506 CC lib/ftl/utils/ftl_mempool.o 00:10:24.506 CC lib/ftl/utils/ftl_bitmap.o 00:10:24.506 CC lib/ftl/utils/ftl_property.o 00:10:24.506 SO libspdk_nvmf.so.18.0 00:10:24.506 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:10:24.506 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:10:24.506 LIB libspdk_vhost.a 00:10:24.506 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:10:24.506 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:10:24.764 LIB libspdk_iscsi.a 00:10:24.764 SO libspdk_vhost.so.8.0 00:10:24.764 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:10:24.764 SYMLINK libspdk_nvmf.so 00:10:24.764 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:10:24.764 SO libspdk_iscsi.so.8.0 00:10:24.764 CC lib/ftl/upgrade/ftl_sb_v3.o 00:10:24.764 SYMLINK libspdk_vhost.so 00:10:24.764 CC lib/ftl/upgrade/ftl_sb_v5.o 00:10:24.764 CC lib/ftl/nvc/ftl_nvc_dev.o 00:10:24.764 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:10:24.764 CC lib/ftl/base/ftl_base_dev.o 00:10:24.764 CC lib/ftl/base/ftl_base_bdev.o 00:10:24.764 CC lib/ftl/ftl_trace.o 00:10:25.021 SYMLINK libspdk_iscsi.so 00:10:25.021 LIB libspdk_ftl.a 00:10:25.277 SO libspdk_ftl.so.9.0 00:10:25.842 SYMLINK libspdk_ftl.so 00:10:26.099 CC module/env_dpdk/env_dpdk_rpc.o 00:10:26.099 CC module/accel/iaa/accel_iaa.o 00:10:26.099 CC module/sock/posix/posix.o 00:10:26.099 CC module/accel/ioat/accel_ioat.o 00:10:26.099 CC module/scheduler/dynamic/scheduler_dynamic.o 00:10:26.100 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:10:26.100 CC module/blob/bdev/blob_bdev.o 00:10:26.100 CC module/accel/dsa/accel_dsa.o 00:10:26.100 CC module/accel/error/accel_error.o 00:10:26.100 CC module/keyring/file/keyring.o 00:10:26.358 LIB libspdk_env_dpdk_rpc.a 00:10:26.358 SO libspdk_env_dpdk_rpc.so.6.0 00:10:26.358 LIB libspdk_scheduler_dpdk_governor.a 00:10:26.358 SYMLINK libspdk_env_dpdk_rpc.so 00:10:26.358 CC module/keyring/file/keyring_rpc.o 00:10:26.358 SO libspdk_scheduler_dpdk_governor.so.4.0 00:10:26.358 CC module/accel/ioat/accel_ioat_rpc.o 00:10:26.358 CC module/accel/error/accel_error_rpc.o 00:10:26.358 CC module/accel/iaa/accel_iaa_rpc.o 00:10:26.358 LIB libspdk_scheduler_dynamic.a 00:10:26.358 CC module/accel/dsa/accel_dsa_rpc.o 00:10:26.358 SYMLINK libspdk_scheduler_dpdk_governor.so 00:10:26.358 LIB libspdk_blob_bdev.a 00:10:26.358 SO libspdk_scheduler_dynamic.so.4.0 00:10:26.358 SO libspdk_blob_bdev.so.11.0 00:10:26.616 SYMLINK libspdk_scheduler_dynamic.so 00:10:26.616 LIB libspdk_keyring_file.a 00:10:26.616 LIB libspdk_accel_ioat.a 00:10:26.616 LIB libspdk_accel_error.a 00:10:26.616 LIB libspdk_accel_iaa.a 00:10:26.616 SYMLINK libspdk_blob_bdev.so 00:10:26.616 CC module/scheduler/gscheduler/gscheduler.o 00:10:26.616 SO libspdk_accel_ioat.so.6.0 00:10:26.616 SO 
libspdk_keyring_file.so.1.0 00:10:26.616 SO libspdk_accel_error.so.2.0 00:10:26.616 SO libspdk_accel_iaa.so.3.0 00:10:26.616 LIB libspdk_accel_dsa.a 00:10:26.616 SYMLINK libspdk_keyring_file.so 00:10:26.616 SYMLINK libspdk_accel_ioat.so 00:10:26.616 SYMLINK libspdk_accel_error.so 00:10:26.616 SO libspdk_accel_dsa.so.5.0 00:10:26.616 SYMLINK libspdk_accel_iaa.so 00:10:26.616 SYMLINK libspdk_accel_dsa.so 00:10:26.616 LIB libspdk_scheduler_gscheduler.a 00:10:26.616 SO libspdk_scheduler_gscheduler.so.4.0 00:10:26.874 CC module/bdev/delay/vbdev_delay.o 00:10:26.874 CC module/blobfs/bdev/blobfs_bdev.o 00:10:26.874 CC module/bdev/error/vbdev_error.o 00:10:26.874 CC module/bdev/gpt/gpt.o 00:10:26.874 SYMLINK libspdk_scheduler_gscheduler.so 00:10:26.874 CC module/bdev/lvol/vbdev_lvol.o 00:10:26.874 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:10:26.874 CC module/bdev/malloc/bdev_malloc.o 00:10:26.874 CC module/bdev/null/bdev_null.o 00:10:26.874 CC module/bdev/nvme/bdev_nvme.o 00:10:26.874 LIB libspdk_sock_posix.a 00:10:26.874 SO libspdk_sock_posix.so.6.0 00:10:26.874 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:10:27.132 CC module/bdev/gpt/vbdev_gpt.o 00:10:27.132 SYMLINK libspdk_sock_posix.so 00:10:27.132 CC module/bdev/null/bdev_null_rpc.o 00:10:27.132 CC module/bdev/error/vbdev_error_rpc.o 00:10:27.132 CC module/bdev/malloc/bdev_malloc_rpc.o 00:10:27.132 CC module/bdev/delay/vbdev_delay_rpc.o 00:10:27.132 CC module/bdev/nvme/bdev_nvme_rpc.o 00:10:27.132 LIB libspdk_blobfs_bdev.a 00:10:27.132 CC module/bdev/nvme/nvme_rpc.o 00:10:27.132 SO libspdk_blobfs_bdev.so.6.0 00:10:27.390 LIB libspdk_bdev_null.a 00:10:27.390 LIB libspdk_bdev_error.a 00:10:27.390 LIB libspdk_bdev_gpt.a 00:10:27.390 SO libspdk_bdev_null.so.6.0 00:10:27.390 SYMLINK libspdk_blobfs_bdev.so 00:10:27.390 SO libspdk_bdev_error.so.6.0 00:10:27.390 CC module/bdev/nvme/bdev_mdns_client.o 00:10:27.390 SO libspdk_bdev_gpt.so.6.0 00:10:27.390 LIB libspdk_bdev_malloc.a 00:10:27.390 LIB libspdk_bdev_delay.a 00:10:27.390 LIB libspdk_bdev_lvol.a 00:10:27.390 SYMLINK libspdk_bdev_null.so 00:10:27.390 SYMLINK libspdk_bdev_error.so 00:10:27.390 SO libspdk_bdev_malloc.so.6.0 00:10:27.390 SO libspdk_bdev_delay.so.6.0 00:10:27.390 SO libspdk_bdev_lvol.so.6.0 00:10:27.390 SYMLINK libspdk_bdev_gpt.so 00:10:27.648 SYMLINK libspdk_bdev_malloc.so 00:10:27.648 SYMLINK libspdk_bdev_delay.so 00:10:27.648 SYMLINK libspdk_bdev_lvol.so 00:10:27.648 CC module/bdev/nvme/vbdev_opal.o 00:10:27.648 CC module/bdev/nvme/vbdev_opal_rpc.o 00:10:27.648 CC module/bdev/raid/bdev_raid.o 00:10:27.648 CC module/bdev/passthru/vbdev_passthru.o 00:10:27.648 CC module/bdev/split/vbdev_split.o 00:10:27.648 CC module/bdev/aio/bdev_aio.o 00:10:27.648 CC module/bdev/zone_block/vbdev_zone_block.o 00:10:27.648 CC module/bdev/ftl/bdev_ftl.o 00:10:27.905 CC module/bdev/ftl/bdev_ftl_rpc.o 00:10:27.905 CC module/bdev/raid/bdev_raid_rpc.o 00:10:27.905 CC module/bdev/raid/bdev_raid_sb.o 00:10:27.905 CC module/bdev/split/vbdev_split_rpc.o 00:10:27.905 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:10:28.164 CC module/bdev/raid/raid0.o 00:10:28.164 LIB libspdk_bdev_ftl.a 00:10:28.164 CC module/bdev/raid/raid1.o 00:10:28.164 CC module/bdev/aio/bdev_aio_rpc.o 00:10:28.164 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:10:28.164 SO libspdk_bdev_ftl.so.6.0 00:10:28.164 LIB libspdk_bdev_split.a 00:10:28.164 LIB libspdk_bdev_passthru.a 00:10:28.164 SYMLINK libspdk_bdev_ftl.so 00:10:28.164 SO libspdk_bdev_passthru.so.6.0 00:10:28.164 SO libspdk_bdev_split.so.6.0 00:10:28.164 CC 
module/bdev/raid/concat.o 00:10:28.164 SYMLINK libspdk_bdev_passthru.so 00:10:28.164 LIB libspdk_bdev_aio.a 00:10:28.164 LIB libspdk_bdev_zone_block.a 00:10:28.164 SYMLINK libspdk_bdev_split.so 00:10:28.164 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:10:28.422 SO libspdk_bdev_aio.so.6.0 00:10:28.422 SO libspdk_bdev_zone_block.so.6.0 00:10:28.422 SYMLINK libspdk_bdev_aio.so 00:10:28.422 SYMLINK libspdk_bdev_zone_block.so 00:10:28.422 CC module/bdev/iscsi/bdev_iscsi.o 00:10:28.422 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:10:28.422 CC module/bdev/virtio/bdev_virtio_scsi.o 00:10:28.422 CC module/bdev/virtio/bdev_virtio_blk.o 00:10:28.422 CC module/bdev/virtio/bdev_virtio_rpc.o 00:10:28.680 LIB libspdk_bdev_raid.a 00:10:28.680 SO libspdk_bdev_raid.so.6.0 00:10:28.680 LIB libspdk_bdev_iscsi.a 00:10:28.680 SO libspdk_bdev_iscsi.so.6.0 00:10:28.938 SYMLINK libspdk_bdev_raid.so 00:10:28.938 SYMLINK libspdk_bdev_iscsi.so 00:10:28.938 LIB libspdk_bdev_virtio.a 00:10:28.938 SO libspdk_bdev_virtio.so.6.0 00:10:29.196 SYMLINK libspdk_bdev_virtio.so 00:10:29.196 LIB libspdk_bdev_nvme.a 00:10:29.196 SO libspdk_bdev_nvme.so.7.0 00:10:29.454 SYMLINK libspdk_bdev_nvme.so 00:10:30.019 CC module/event/subsystems/scheduler/scheduler.o 00:10:30.019 CC module/event/subsystems/sock/sock.o 00:10:30.019 CC module/event/subsystems/vmd/vmd.o 00:10:30.019 CC module/event/subsystems/vmd/vmd_rpc.o 00:10:30.019 CC module/event/subsystems/iobuf/iobuf.o 00:10:30.019 CC module/event/subsystems/keyring/keyring.o 00:10:30.019 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:10:30.019 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:10:30.019 LIB libspdk_event_scheduler.a 00:10:30.019 LIB libspdk_event_sock.a 00:10:30.019 LIB libspdk_event_keyring.a 00:10:30.019 LIB libspdk_event_vmd.a 00:10:30.019 SO libspdk_event_scheduler.so.4.0 00:10:30.019 LIB libspdk_event_vhost_blk.a 00:10:30.019 SO libspdk_event_sock.so.5.0 00:10:30.019 SO libspdk_event_keyring.so.1.0 00:10:30.019 LIB libspdk_event_iobuf.a 00:10:30.019 SO libspdk_event_vmd.so.6.0 00:10:30.019 SO libspdk_event_vhost_blk.so.3.0 00:10:30.277 SO libspdk_event_iobuf.so.3.0 00:10:30.277 SYMLINK libspdk_event_sock.so 00:10:30.277 SYMLINK libspdk_event_keyring.so 00:10:30.277 SYMLINK libspdk_event_scheduler.so 00:10:30.277 SYMLINK libspdk_event_vmd.so 00:10:30.277 SYMLINK libspdk_event_vhost_blk.so 00:10:30.277 SYMLINK libspdk_event_iobuf.so 00:10:30.534 CC module/event/subsystems/accel/accel.o 00:10:30.791 LIB libspdk_event_accel.a 00:10:30.791 SO libspdk_event_accel.so.6.0 00:10:30.791 SYMLINK libspdk_event_accel.so 00:10:31.049 CC module/event/subsystems/bdev/bdev.o 00:10:31.307 LIB libspdk_event_bdev.a 00:10:31.307 SO libspdk_event_bdev.so.6.0 00:10:31.307 SYMLINK libspdk_event_bdev.so 00:10:31.565 CC module/event/subsystems/scsi/scsi.o 00:10:31.565 CC module/event/subsystems/ublk/ublk.o 00:10:31.565 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:10:31.565 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:10:31.565 CC module/event/subsystems/nbd/nbd.o 00:10:31.822 LIB libspdk_event_nbd.a 00:10:31.822 LIB libspdk_event_ublk.a 00:10:31.822 SO libspdk_event_nbd.so.6.0 00:10:31.822 SO libspdk_event_ublk.so.3.0 00:10:31.822 LIB libspdk_event_scsi.a 00:10:31.822 SO libspdk_event_scsi.so.6.0 00:10:31.822 SYMLINK libspdk_event_nbd.so 00:10:31.822 SYMLINK libspdk_event_ublk.so 00:10:31.822 LIB libspdk_event_nvmf.a 00:10:31.822 SYMLINK libspdk_event_scsi.so 00:10:31.822 SO libspdk_event_nvmf.so.6.0 00:10:32.079 SYMLINK libspdk_event_nvmf.so 00:10:32.079 CC 
module/event/subsystems/iscsi/iscsi.o 00:10:32.079 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:10:32.337 LIB libspdk_event_iscsi.a 00:10:32.337 LIB libspdk_event_vhost_scsi.a 00:10:32.337 SO libspdk_event_iscsi.so.6.0 00:10:32.337 SO libspdk_event_vhost_scsi.so.3.0 00:10:32.337 SYMLINK libspdk_event_vhost_scsi.so 00:10:32.337 SYMLINK libspdk_event_iscsi.so 00:10:32.596 SO libspdk.so.6.0 00:10:32.596 SYMLINK libspdk.so 00:10:32.854 CXX app/trace/trace.o 00:10:32.854 CC app/spdk_lspci/spdk_lspci.o 00:10:32.854 CC app/spdk_nvme_perf/perf.o 00:10:32.854 CC app/trace_record/trace_record.o 00:10:32.854 CC app/spdk_nvme_identify/identify.o 00:10:32.854 CC app/nvmf_tgt/nvmf_main.o 00:10:32.854 CC app/iscsi_tgt/iscsi_tgt.o 00:10:32.854 CC app/spdk_tgt/spdk_tgt.o 00:10:32.854 CC examples/accel/perf/accel_perf.o 00:10:32.854 CC test/accel/dif/dif.o 00:10:32.854 LINK spdk_lspci 00:10:33.112 LINK nvmf_tgt 00:10:33.112 LINK iscsi_tgt 00:10:33.112 LINK spdk_tgt 00:10:33.112 LINK spdk_trace_record 00:10:33.112 LINK spdk_trace 00:10:33.370 LINK dif 00:10:33.370 CC test/app/bdev_svc/bdev_svc.o 00:10:33.370 LINK accel_perf 00:10:33.370 CC test/app/histogram_perf/histogram_perf.o 00:10:33.370 CC app/spdk_nvme_discover/discovery_aer.o 00:10:33.370 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:10:33.628 LINK bdev_svc 00:10:33.628 CC examples/bdev/hello_world/hello_bdev.o 00:10:33.628 LINK histogram_perf 00:10:33.628 LINK spdk_nvme_identify 00:10:33.628 CC test/app/jsoncat/jsoncat.o 00:10:33.628 LINK spdk_nvme_perf 00:10:33.628 CC test/app/stub/stub.o 00:10:33.628 CC test/bdev/bdevio/bdevio.o 00:10:33.628 LINK spdk_nvme_discover 00:10:33.628 CC app/spdk_top/spdk_top.o 00:10:33.628 LINK jsoncat 00:10:33.914 LINK hello_bdev 00:10:33.914 LINK stub 00:10:33.914 CC app/vhost/vhost.o 00:10:33.914 LINK nvme_fuzz 00:10:33.914 CC app/spdk_dd/spdk_dd.o 00:10:33.914 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:10:33.914 CC app/fio/nvme/fio_plugin.o 00:10:34.182 LINK bdevio 00:10:34.182 LINK vhost 00:10:34.182 CC app/fio/bdev/fio_plugin.o 00:10:34.182 CC examples/bdev/bdevperf/bdevperf.o 00:10:34.182 CC test/blobfs/mkfs/mkfs.o 00:10:34.182 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:10:34.182 LINK spdk_dd 00:10:34.182 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:10:34.440 LINK mkfs 00:10:34.440 TEST_HEADER include/spdk/accel.h 00:10:34.440 TEST_HEADER include/spdk/accel_module.h 00:10:34.440 TEST_HEADER include/spdk/assert.h 00:10:34.440 TEST_HEADER include/spdk/barrier.h 00:10:34.440 TEST_HEADER include/spdk/base64.h 00:10:34.440 TEST_HEADER include/spdk/bdev.h 00:10:34.440 TEST_HEADER include/spdk/bdev_module.h 00:10:34.440 TEST_HEADER include/spdk/bdev_zone.h 00:10:34.440 TEST_HEADER include/spdk/bit_array.h 00:10:34.440 TEST_HEADER include/spdk/bit_pool.h 00:10:34.440 TEST_HEADER include/spdk/blob_bdev.h 00:10:34.440 TEST_HEADER include/spdk/blobfs_bdev.h 00:10:34.440 TEST_HEADER include/spdk/blobfs.h 00:10:34.440 TEST_HEADER include/spdk/blob.h 00:10:34.440 TEST_HEADER include/spdk/conf.h 00:10:34.440 TEST_HEADER include/spdk/config.h 00:10:34.440 TEST_HEADER include/spdk/cpuset.h 00:10:34.440 TEST_HEADER include/spdk/crc16.h 00:10:34.440 TEST_HEADER include/spdk/crc32.h 00:10:34.440 TEST_HEADER include/spdk/crc64.h 00:10:34.440 TEST_HEADER include/spdk/dif.h 00:10:34.440 TEST_HEADER include/spdk/dma.h 00:10:34.440 TEST_HEADER include/spdk/endian.h 00:10:34.440 TEST_HEADER include/spdk/env_dpdk.h 00:10:34.440 TEST_HEADER include/spdk/env.h 00:10:34.440 TEST_HEADER include/spdk/event.h 00:10:34.440 TEST_HEADER 
include/spdk/fd_group.h 00:10:34.440 TEST_HEADER include/spdk/fd.h 00:10:34.440 TEST_HEADER include/spdk/file.h 00:10:34.440 TEST_HEADER include/spdk/ftl.h 00:10:34.440 TEST_HEADER include/spdk/gpt_spec.h 00:10:34.440 TEST_HEADER include/spdk/hexlify.h 00:10:34.440 TEST_HEADER include/spdk/histogram_data.h 00:10:34.440 TEST_HEADER include/spdk/idxd.h 00:10:34.440 TEST_HEADER include/spdk/idxd_spec.h 00:10:34.440 TEST_HEADER include/spdk/init.h 00:10:34.440 TEST_HEADER include/spdk/ioat.h 00:10:34.440 TEST_HEADER include/spdk/ioat_spec.h 00:10:34.440 TEST_HEADER include/spdk/iscsi_spec.h 00:10:34.440 TEST_HEADER include/spdk/json.h 00:10:34.440 TEST_HEADER include/spdk/jsonrpc.h 00:10:34.440 TEST_HEADER include/spdk/keyring.h 00:10:34.440 TEST_HEADER include/spdk/keyring_module.h 00:10:34.440 TEST_HEADER include/spdk/likely.h 00:10:34.440 TEST_HEADER include/spdk/log.h 00:10:34.440 TEST_HEADER include/spdk/lvol.h 00:10:34.440 TEST_HEADER include/spdk/memory.h 00:10:34.440 TEST_HEADER include/spdk/mmio.h 00:10:34.440 TEST_HEADER include/spdk/nbd.h 00:10:34.440 TEST_HEADER include/spdk/notify.h 00:10:34.440 TEST_HEADER include/spdk/nvme.h 00:10:34.440 TEST_HEADER include/spdk/nvme_intel.h 00:10:34.440 TEST_HEADER include/spdk/nvme_ocssd.h 00:10:34.440 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:10:34.440 TEST_HEADER include/spdk/nvme_spec.h 00:10:34.440 TEST_HEADER include/spdk/nvme_zns.h 00:10:34.440 TEST_HEADER include/spdk/nvmf_cmd.h 00:10:34.440 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:10:34.440 TEST_HEADER include/spdk/nvmf.h 00:10:34.440 TEST_HEADER include/spdk/nvmf_spec.h 00:10:34.440 CC test/dma/test_dma/test_dma.o 00:10:34.440 TEST_HEADER include/spdk/nvmf_transport.h 00:10:34.440 TEST_HEADER include/spdk/opal.h 00:10:34.440 TEST_HEADER include/spdk/opal_spec.h 00:10:34.440 TEST_HEADER include/spdk/pci_ids.h 00:10:34.440 TEST_HEADER include/spdk/pipe.h 00:10:34.698 TEST_HEADER include/spdk/queue.h 00:10:34.698 TEST_HEADER include/spdk/reduce.h 00:10:34.698 TEST_HEADER include/spdk/rpc.h 00:10:34.698 TEST_HEADER include/spdk/scheduler.h 00:10:34.698 TEST_HEADER include/spdk/scsi.h 00:10:34.698 TEST_HEADER include/spdk/scsi_spec.h 00:10:34.698 TEST_HEADER include/spdk/sock.h 00:10:34.698 TEST_HEADER include/spdk/stdinc.h 00:10:34.698 TEST_HEADER include/spdk/string.h 00:10:34.698 TEST_HEADER include/spdk/thread.h 00:10:34.698 TEST_HEADER include/spdk/trace.h 00:10:34.698 TEST_HEADER include/spdk/trace_parser.h 00:10:34.698 TEST_HEADER include/spdk/tree.h 00:10:34.698 TEST_HEADER include/spdk/ublk.h 00:10:34.698 TEST_HEADER include/spdk/util.h 00:10:34.698 TEST_HEADER include/spdk/uuid.h 00:10:34.698 TEST_HEADER include/spdk/version.h 00:10:34.698 TEST_HEADER include/spdk/vfio_user_pci.h 00:10:34.698 TEST_HEADER include/spdk/vfio_user_spec.h 00:10:34.698 TEST_HEADER include/spdk/vhost.h 00:10:34.698 TEST_HEADER include/spdk/vmd.h 00:10:34.698 TEST_HEADER include/spdk/xor.h 00:10:34.698 TEST_HEADER include/spdk/zipf.h 00:10:34.698 CXX test/cpp_headers/accel.o 00:10:34.698 LINK spdk_bdev 00:10:34.698 LINK spdk_nvme 00:10:34.698 LINK spdk_top 00:10:34.698 LINK vhost_fuzz 00:10:34.698 CC test/env/mem_callbacks/mem_callbacks.o 00:10:34.698 CXX test/cpp_headers/accel_module.o 00:10:34.698 CC test/event/event_perf/event_perf.o 00:10:34.956 LINK bdevperf 00:10:34.956 CXX test/cpp_headers/assert.o 00:10:34.956 LINK event_perf 00:10:34.956 CC test/rpc_client/rpc_client_test.o 00:10:34.956 LINK test_dma 00:10:34.956 CC test/nvme/aer/aer.o 00:10:34.956 CC test/lvol/esnap/esnap.o 00:10:35.214 
CXX test/cpp_headers/barrier.o 00:10:35.214 CC examples/blob/hello_world/hello_blob.o 00:10:35.214 LINK rpc_client_test 00:10:35.214 CC test/event/reactor/reactor.o 00:10:35.214 CC test/event/reactor_perf/reactor_perf.o 00:10:35.214 CXX test/cpp_headers/base64.o 00:10:35.214 CC test/event/app_repeat/app_repeat.o 00:10:35.214 LINK aer 00:10:35.214 LINK mem_callbacks 00:10:35.471 LINK reactor 00:10:35.472 LINK hello_blob 00:10:35.472 LINK reactor_perf 00:10:35.472 CC test/event/scheduler/scheduler.o 00:10:35.472 CXX test/cpp_headers/bdev.o 00:10:35.472 LINK app_repeat 00:10:35.472 LINK iscsi_fuzz 00:10:35.472 CC test/env/vtophys/vtophys.o 00:10:35.472 CXX test/cpp_headers/bdev_module.o 00:10:35.472 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:10:35.729 CC test/nvme/reset/reset.o 00:10:35.729 CC examples/blob/cli/blobcli.o 00:10:35.729 LINK scheduler 00:10:35.729 CC test/nvme/sgl/sgl.o 00:10:35.729 CC test/nvme/e2edp/nvme_dp.o 00:10:35.729 LINK vtophys 00:10:35.729 LINK env_dpdk_post_init 00:10:35.729 CXX test/cpp_headers/bdev_zone.o 00:10:35.729 CXX test/cpp_headers/bit_array.o 00:10:35.987 CXX test/cpp_headers/bit_pool.o 00:10:35.987 LINK reset 00:10:35.987 CXX test/cpp_headers/blob_bdev.o 00:10:35.987 LINK sgl 00:10:35.987 LINK nvme_dp 00:10:35.987 CC test/env/memory/memory_ut.o 00:10:35.987 CXX test/cpp_headers/blobfs_bdev.o 00:10:35.987 CC test/nvme/overhead/overhead.o 00:10:36.246 CC test/nvme/err_injection/err_injection.o 00:10:36.246 LINK blobcli 00:10:36.246 CC test/nvme/startup/startup.o 00:10:36.246 CXX test/cpp_headers/blobfs.o 00:10:36.246 CC examples/ioat/perf/perf.o 00:10:36.246 CC examples/nvme/hello_world/hello_world.o 00:10:36.246 CC examples/sock/hello_world/hello_sock.o 00:10:36.246 LINK err_injection 00:10:36.246 LINK overhead 00:10:36.504 CXX test/cpp_headers/blob.o 00:10:36.504 LINK startup 00:10:36.504 CC examples/nvme/reconnect/reconnect.o 00:10:36.504 LINK ioat_perf 00:10:36.504 LINK hello_world 00:10:36.504 LINK hello_sock 00:10:36.504 CC examples/nvme/nvme_manage/nvme_manage.o 00:10:36.504 CC examples/nvme/arbitration/arbitration.o 00:10:36.504 CXX test/cpp_headers/conf.o 00:10:36.761 CC examples/ioat/verify/verify.o 00:10:36.761 CC test/nvme/reserve/reserve.o 00:10:36.761 CXX test/cpp_headers/config.o 00:10:36.761 CC test/nvme/simple_copy/simple_copy.o 00:10:36.761 LINK reconnect 00:10:36.761 CXX test/cpp_headers/cpuset.o 00:10:36.761 CC test/nvme/connect_stress/connect_stress.o 00:10:37.019 LINK arbitration 00:10:37.019 LINK reserve 00:10:37.019 LINK verify 00:10:37.019 CXX test/cpp_headers/crc16.o 00:10:37.019 LINK memory_ut 00:10:37.019 LINK connect_stress 00:10:37.019 LINK simple_copy 00:10:37.019 CC test/nvme/boot_partition/boot_partition.o 00:10:37.019 LINK nvme_manage 00:10:37.019 CXX test/cpp_headers/crc32.o 00:10:37.277 CC test/nvme/compliance/nvme_compliance.o 00:10:37.277 CC test/nvme/fused_ordering/fused_ordering.o 00:10:37.277 CC test/nvme/doorbell_aers/doorbell_aers.o 00:10:37.277 LINK boot_partition 00:10:37.277 CC test/nvme/fdp/fdp.o 00:10:37.277 CXX test/cpp_headers/crc64.o 00:10:37.277 CC examples/nvme/hotplug/hotplug.o 00:10:37.277 CC test/env/pci/pci_ut.o 00:10:37.277 CC test/thread/poller_perf/poller_perf.o 00:10:37.536 LINK fused_ordering 00:10:37.536 LINK doorbell_aers 00:10:37.536 CXX test/cpp_headers/dif.o 00:10:37.536 CC examples/nvme/cmb_copy/cmb_copy.o 00:10:37.536 LINK nvme_compliance 00:10:37.536 LINK poller_perf 00:10:37.536 LINK hotplug 00:10:37.536 LINK fdp 00:10:37.793 CXX test/cpp_headers/dma.o 00:10:37.793 CXX 
test/cpp_headers/endian.o 00:10:37.793 CC test/nvme/cuse/cuse.o 00:10:37.793 LINK cmb_copy 00:10:37.793 LINK pci_ut 00:10:37.793 CC examples/vmd/lsvmd/lsvmd.o 00:10:37.793 CC examples/vmd/led/led.o 00:10:37.793 CXX test/cpp_headers/env_dpdk.o 00:10:38.052 LINK lsvmd 00:10:38.052 LINK led 00:10:38.052 CC examples/nvme/abort/abort.o 00:10:38.052 CC examples/util/zipf/zipf.o 00:10:38.052 CC examples/nvmf/nvmf/nvmf.o 00:10:38.052 CC examples/thread/thread/thread_ex.o 00:10:38.052 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:10:38.052 CXX test/cpp_headers/env.o 00:10:38.052 LINK zipf 00:10:38.310 CXX test/cpp_headers/event.o 00:10:38.310 LINK pmr_persistence 00:10:38.310 CC examples/idxd/perf/perf.o 00:10:38.310 LINK nvmf 00:10:38.310 LINK thread 00:10:38.310 LINK abort 00:10:38.568 CXX test/cpp_headers/fd_group.o 00:10:38.568 CXX test/cpp_headers/fd.o 00:10:38.568 CC examples/interrupt_tgt/interrupt_tgt.o 00:10:38.568 CXX test/cpp_headers/file.o 00:10:38.568 CXX test/cpp_headers/ftl.o 00:10:38.568 CXX test/cpp_headers/gpt_spec.o 00:10:38.568 LINK idxd_perf 00:10:38.568 CXX test/cpp_headers/hexlify.o 00:10:38.568 CXX test/cpp_headers/histogram_data.o 00:10:38.568 CXX test/cpp_headers/idxd.o 00:10:38.568 LINK interrupt_tgt 00:10:38.826 LINK cuse 00:10:38.826 CXX test/cpp_headers/idxd_spec.o 00:10:38.826 CXX test/cpp_headers/init.o 00:10:38.826 CXX test/cpp_headers/ioat.o 00:10:38.826 CXX test/cpp_headers/ioat_spec.o 00:10:38.826 CXX test/cpp_headers/iscsi_spec.o 00:10:38.826 CXX test/cpp_headers/json.o 00:10:38.826 CXX test/cpp_headers/jsonrpc.o 00:10:39.085 CXX test/cpp_headers/keyring.o 00:10:39.085 CXX test/cpp_headers/keyring_module.o 00:10:39.085 CXX test/cpp_headers/likely.o 00:10:39.085 CXX test/cpp_headers/log.o 00:10:39.085 CXX test/cpp_headers/lvol.o 00:10:39.085 CXX test/cpp_headers/memory.o 00:10:39.085 CXX test/cpp_headers/mmio.o 00:10:39.085 CXX test/cpp_headers/nbd.o 00:10:39.085 CXX test/cpp_headers/notify.o 00:10:39.085 CXX test/cpp_headers/nvme.o 00:10:39.085 CXX test/cpp_headers/nvme_intel.o 00:10:39.085 CXX test/cpp_headers/nvme_ocssd.o 00:10:39.085 CXX test/cpp_headers/nvme_ocssd_spec.o 00:10:39.085 CXX test/cpp_headers/nvme_spec.o 00:10:39.342 CXX test/cpp_headers/nvme_zns.o 00:10:39.343 CXX test/cpp_headers/nvmf_cmd.o 00:10:39.343 CXX test/cpp_headers/nvmf_fc_spec.o 00:10:39.343 CXX test/cpp_headers/nvmf.o 00:10:39.343 CXX test/cpp_headers/nvmf_spec.o 00:10:39.343 CXX test/cpp_headers/nvmf_transport.o 00:10:39.343 CXX test/cpp_headers/opal.o 00:10:39.343 CXX test/cpp_headers/opal_spec.o 00:10:39.600 CXX test/cpp_headers/pci_ids.o 00:10:39.600 CXX test/cpp_headers/pipe.o 00:10:39.600 CXX test/cpp_headers/queue.o 00:10:39.600 CXX test/cpp_headers/reduce.o 00:10:39.600 LINK esnap 00:10:39.600 CXX test/cpp_headers/rpc.o 00:10:39.600 CXX test/cpp_headers/scheduler.o 00:10:39.600 CXX test/cpp_headers/scsi.o 00:10:39.600 CXX test/cpp_headers/scsi_spec.o 00:10:39.600 CXX test/cpp_headers/sock.o 00:10:39.600 CXX test/cpp_headers/stdinc.o 00:10:39.600 CXX test/cpp_headers/string.o 00:10:39.858 CXX test/cpp_headers/thread.o 00:10:39.858 CXX test/cpp_headers/trace.o 00:10:39.858 CXX test/cpp_headers/trace_parser.o 00:10:39.858 CXX test/cpp_headers/tree.o 00:10:39.858 CXX test/cpp_headers/ublk.o 00:10:39.858 CXX test/cpp_headers/util.o 00:10:39.858 CXX test/cpp_headers/uuid.o 00:10:39.858 CXX test/cpp_headers/version.o 00:10:39.858 CXX test/cpp_headers/vfio_user_pci.o 00:10:39.858 CXX test/cpp_headers/vfio_user_spec.o 00:10:39.858 CXX test/cpp_headers/vhost.o 00:10:40.116 CXX 
test/cpp_headers/vmd.o 00:10:40.116 CXX test/cpp_headers/xor.o 00:10:40.116 CXX test/cpp_headers/zipf.o 00:10:45.380 00:10:45.380 real 1m2.276s 00:10:45.380 user 5m48.144s 00:10:45.380 sys 1m13.890s 00:10:45.380 13:28:57 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:10:45.380 13:28:57 make -- common/autotest_common.sh@10 -- $ set +x 00:10:45.380 ************************************ 00:10:45.380 END TEST make 00:10:45.380 ************************************ 00:10:45.380 13:28:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:10:45.380 13:28:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:10:45.380 13:28:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:10:45.380 13:28:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:45.380 13:28:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:10:45.380 13:28:57 -- pm/common@44 -- $ pid=6034 00:10:45.380 13:28:57 -- pm/common@50 -- $ kill -TERM 6034 00:10:45.380 13:28:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:10:45.380 13:28:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:10:45.380 13:28:57 -- pm/common@44 -- $ pid=6036 00:10:45.380 13:28:57 -- pm/common@50 -- $ kill -TERM 6036 00:10:45.380 13:28:57 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:45.380 13:28:57 -- nvmf/common.sh@7 -- # uname -s 00:10:45.380 13:28:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.380 13:28:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.380 13:28:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.380 13:28:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.380 13:28:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.380 13:28:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.380 13:28:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.380 13:28:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.381 13:28:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.381 13:28:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.381 13:28:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:10:45.381 13:28:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:10:45.381 13:28:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.381 13:28:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.381 13:28:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:45.381 13:28:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.381 13:28:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:45.381 13:28:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.381 13:28:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.381 13:28:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.381 13:28:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.381 13:28:57 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.381 13:28:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.381 13:28:57 -- paths/export.sh@5 -- # export PATH 00:10:45.381 13:28:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.381 13:28:57 -- nvmf/common.sh@47 -- # : 0 00:10:45.381 13:28:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:45.381 13:28:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:45.381 13:28:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.381 13:28:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.381 13:28:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.381 13:28:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:45.381 13:28:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:45.381 13:28:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:45.381 13:28:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:10:45.381 13:28:57 -- spdk/autotest.sh@32 -- # uname -s 00:10:45.381 13:28:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:10:45.381 13:28:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:10:45.381 13:28:57 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:45.381 13:28:57 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:10:45.381 13:28:57 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:10:45.381 13:28:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:10:45.381 13:28:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:10:45.381 13:28:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:10:45.381 13:28:57 -- spdk/autotest.sh@48 -- # udevadm_pid=67985 00:10:45.381 13:28:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:10:45.381 13:28:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:10:45.381 13:28:57 -- pm/common@17 -- # local monitor 00:10:45.381 13:28:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:45.381 13:28:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:10:45.381 13:28:57 -- pm/common@25 -- # sleep 1 00:10:45.381 13:28:57 -- pm/common@21 -- # date +%s 00:10:45.381 13:28:57 -- pm/common@21 -- # date +%s 00:10:45.381 13:28:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715779737 00:10:45.381 13:28:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715779737 00:10:45.381 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715779737_collect-cpu-load.pm.log 00:10:45.381 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715779737_collect-vmstat.pm.log 00:10:45.948 13:28:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:10:45.948 13:28:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:10:45.948 13:28:58 -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:45.948 13:28:58 -- common/autotest_common.sh@10 -- # set +x 00:10:45.948 13:28:58 -- spdk/autotest.sh@59 -- # create_test_list 00:10:45.948 13:28:58 -- common/autotest_common.sh@744 -- # xtrace_disable 00:10:45.948 13:28:58 -- common/autotest_common.sh@10 -- # set +x 00:10:45.948 13:28:58 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:10:45.948 13:28:58 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:10:45.948 13:28:58 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:10:45.948 13:28:58 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:10:45.948 13:28:58 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:10:45.948 13:28:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:10:45.948 13:28:58 -- common/autotest_common.sh@1451 -- # uname 00:10:45.948 13:28:58 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:10:45.948 13:28:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:10:45.948 13:28:58 -- common/autotest_common.sh@1471 -- # uname 00:10:45.948 13:28:58 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:10:45.948 13:28:58 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:10:45.948 13:28:58 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:10:45.948 13:28:58 -- spdk/autotest.sh@72 -- # hash lcov 00:10:45.948 13:28:59 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:10:45.948 13:28:59 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:10:45.948 --rc lcov_branch_coverage=1 00:10:45.948 --rc lcov_function_coverage=1 00:10:45.948 --rc genhtml_branch_coverage=1 00:10:45.948 --rc genhtml_function_coverage=1 00:10:45.948 --rc genhtml_legend=1 00:10:45.948 --rc geninfo_all_blocks=1 00:10:45.948 ' 00:10:45.948 13:28:59 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:10:45.948 --rc lcov_branch_coverage=1 00:10:45.948 --rc lcov_function_coverage=1 00:10:45.948 --rc genhtml_branch_coverage=1 00:10:45.948 --rc genhtml_function_coverage=1 00:10:45.948 --rc genhtml_legend=1 00:10:45.948 --rc geninfo_all_blocks=1 00:10:45.948 ' 00:10:45.948 13:28:59 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:10:45.948 --rc lcov_branch_coverage=1 00:10:45.948 --rc lcov_function_coverage=1 00:10:45.948 --rc genhtml_branch_coverage=1 00:10:45.948 --rc genhtml_function_coverage=1 00:10:45.948 --rc genhtml_legend=1 00:10:45.948 --rc geninfo_all_blocks=1 00:10:45.948 --no-external' 00:10:45.948 13:28:59 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:10:45.948 --rc lcov_branch_coverage=1 00:10:45.948 --rc lcov_function_coverage=1 00:10:45.948 --rc genhtml_branch_coverage=1 00:10:45.948 --rc genhtml_function_coverage=1 00:10:45.948 --rc genhtml_legend=1 00:10:45.948 --rc geninfo_all_blocks=1 00:10:45.948 --no-external' 00:10:45.948 13:28:59 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:10:46.207 lcov: LCOV version 
1.14 00:10:46.207 13:28:59 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:10:54.322 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:10:54.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:10:54.322 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:10:54.322 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:10:54.323 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:10:54.323 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:11:00.879 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:11:00.879 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:11:13.077 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:11:13.077 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:11:13.077 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:11:13.077 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:11:13.077 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:11:13.077 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:11:13.077 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:11:13.077 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:11:13.077 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:11:13.077 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:11:13.077 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:11:13.077 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:11:13.077 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:11:13.077 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:11:13.077 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:11:13.077 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:11:13.077 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:11:13.077 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:11:13.077 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:11:13.077 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:11:13.077 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:11:13.077 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:11:13.077 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:11:13.077 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:11:13.077 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:11:13.077 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:11:13.077 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:11:13.077 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions 
found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:11:13.078 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:11:13.078 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:11:13.078 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:11:13.079 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:11:13.079 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:11:16.359 13:29:28 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:11:16.359 13:29:29 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:16.359 13:29:29 -- common/autotest_common.sh@10 -- # set +x 00:11:16.359 13:29:29 -- spdk/autotest.sh@91 -- # rm -f 00:11:16.359 13:29:29 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:16.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:16.878 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:11:16.878 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:11:16.878 13:29:29 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:11:16.878 13:29:29 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:11:16.878 13:29:29 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:11:16.878 13:29:29 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:11:16.878 13:29:29 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:16.878 13:29:29 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:11:16.878 
13:29:29 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:11:16.878 13:29:29 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:16.878 13:29:29 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:16.878 13:29:29 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:16.878 13:29:29 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:11:16.878 13:29:29 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:11:16.878 13:29:29 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:16.878 13:29:29 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:16.878 13:29:29 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:16.878 13:29:29 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:11:16.878 13:29:29 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:11:16.878 13:29:29 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:16.878 13:29:29 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:16.878 13:29:29 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:16.878 13:29:29 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:11:16.878 13:29:29 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:11:16.878 13:29:29 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:16.878 13:29:29 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:16.878 13:29:29 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:11:16.878 13:29:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:16.878 13:29:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:16.878 13:29:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:11:16.878 13:29:29 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:11:16.878 13:29:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:16.878 No valid GPT data, bailing 00:11:16.878 13:29:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:16.879 13:29:29 -- scripts/common.sh@391 -- # pt= 00:11:16.879 13:29:29 -- scripts/common.sh@392 -- # return 1 00:11:16.879 13:29:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:11:16.879 1+0 records in 00:11:16.879 1+0 records out 00:11:16.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00395188 s, 265 MB/s 00:11:16.879 13:29:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:16.879 13:29:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:16.879 13:29:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:11:16.879 13:29:29 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:11:16.879 13:29:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:11:16.879 No valid GPT data, bailing 00:11:16.879 13:29:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:11:16.879 13:29:29 -- scripts/common.sh@391 -- # pt= 00:11:16.879 13:29:29 -- scripts/common.sh@392 -- # return 1 00:11:16.879 13:29:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:11:16.879 1+0 records in 00:11:16.879 1+0 records out 00:11:16.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431041 s, 243 MB/s 00:11:16.879 13:29:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:16.879 13:29:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 
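The xtrace above walks get_zoned_devs from autotest_common.sh: each /sys/block/nvme* entry is kept only if its queue/zoned attribute reports something other than "none". A minimal standalone sketch of that check (illustrative variable names; the traced helper additionally records results in an associative array keyed by device):
  # Sketch only: list NVMe block devices whose zoned model is not "none".
  zoned=()
  for nvme in /sys/block/nvme*; do
    dev=${nvme##*/}
    [[ -e $nvme/queue/zoned ]] || continue
    [[ $(<"$nvme/queue/zoned") != none ]] && zoned+=("$dev")
  done
  echo "zoned devices: ${zoned[*]:-<none>}"
On this run every namespace reports "none", so the later "(( 0 > 0 ))" branch is skipped.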
00:11:16.879 13:29:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:11:16.879 13:29:29 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:11:16.879 13:29:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:11:17.140 No valid GPT data, bailing 00:11:17.140 13:29:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:11:17.140 13:29:30 -- scripts/common.sh@391 -- # pt= 00:11:17.140 13:29:30 -- scripts/common.sh@392 -- # return 1 00:11:17.140 13:29:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:11:17.140 1+0 records in 00:11:17.140 1+0 records out 00:11:17.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493415 s, 213 MB/s 00:11:17.140 13:29:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:11:17.140 13:29:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:11:17.140 13:29:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:11:17.140 13:29:30 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:11:17.140 13:29:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:11:17.140 No valid GPT data, bailing 00:11:17.140 13:29:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:11:17.140 13:29:30 -- scripts/common.sh@391 -- # pt= 00:11:17.140 13:29:30 -- scripts/common.sh@392 -- # return 1 00:11:17.140 13:29:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:11:17.140 1+0 records in 00:11:17.140 1+0 records out 00:11:17.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00366715 s, 286 MB/s 00:11:17.140 13:29:30 -- spdk/autotest.sh@118 -- # sync 00:11:17.140 13:29:30 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:11:17.140 13:29:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:11:17.140 13:29:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:11:19.040 13:29:31 -- spdk/autotest.sh@124 -- # uname -s 00:11:19.040 13:29:31 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:11:19.040 13:29:31 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:11:19.040 13:29:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:19.040 13:29:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:19.040 13:29:31 -- common/autotest_common.sh@10 -- # set +x 00:11:19.040 ************************************ 00:11:19.040 START TEST setup.sh 00:11:19.040 ************************************ 00:11:19.040 13:29:31 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:11:19.040 * Looking for test storage... 
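Before the setup tests start, each whole NVMe namespace traced above is probed for an existing partition table, and only devices where spdk-gpt.py and blkid find nothing ("No valid GPT data, bailing") get their first MiB zeroed. A rough, simplified equivalent of that guard (explicit device list taken from the trace; the real loop in autotest.sh uses the block_in_use helper and an extglob pattern):
  # Sketch: zero the first MiB of a namespace only when no partition
  # signature is found, mirroring the "No valid GPT data, bailing" path.
  for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3; do
    pt=$(blkid -s PTTYPE -o value "$dev")
    if [[ -z $pt ]]; then
      dd if=/dev/zero of="$dev" bs=1M count=1
    fi
  done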
00:11:19.040 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:19.040 13:29:32 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:11:19.040 13:29:32 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:11:19.040 13:29:32 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:11:19.040 13:29:32 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:19.040 13:29:32 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:19.040 13:29:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:19.040 ************************************ 00:11:19.040 START TEST acl 00:11:19.040 ************************************ 00:11:19.040 13:29:32 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:11:19.040 * Looking for test storage... 00:11:19.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:19.298 13:29:32 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:11:19.298 13:29:32 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:19.298 13:29:32 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:11:19.298 13:29:32 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:11:19.298 13:29:32 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:11:19.298 
13:29:32 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:11:19.298 13:29:32 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:11:19.298 13:29:32 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:19.298 13:29:32 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:19.865 13:29:32 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:11:19.865 13:29:32 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:11:19.865 13:29:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:19.865 13:29:32 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:11:19.865 13:29:32 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:11:19.865 13:29:32 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:20.431 13:29:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:11:20.431 13:29:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:20.431 13:29:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:20.431 Hugepages 00:11:20.431 node hugesize free / total 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:20.688 00:11:20.688 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:11:20.688 13:29:33 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:11:20.688 13:29:33 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:20.688 13:29:33 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:20.688 13:29:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:20.688 ************************************ 00:11:20.688 START TEST denied 
00:11:20.688 ************************************ 00:11:20.688 13:29:33 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:11:20.688 13:29:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:11:20.688 13:29:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:11:20.688 13:29:33 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:11:20.688 13:29:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:11:20.688 13:29:33 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:21.619 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:11:21.620 13:29:34 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:11:21.620 13:29:34 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:11:21.620 13:29:34 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:11:21.620 13:29:34 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:11:21.620 13:29:34 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:11:21.620 13:29:34 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:11:21.620 13:29:34 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:11:21.620 13:29:34 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:11:21.620 13:29:34 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:21.620 13:29:34 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:22.185 00:11:22.185 real 0m1.370s 00:11:22.185 user 0m0.543s 00:11:22.185 sys 0m0.792s 00:11:22.185 13:29:35 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:22.185 13:29:35 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:11:22.185 ************************************ 00:11:22.185 END TEST denied 00:11:22.185 ************************************ 00:11:22.185 13:29:35 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:11:22.185 13:29:35 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:22.185 13:29:35 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:22.185 13:29:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:22.185 ************************************ 00:11:22.185 START TEST allowed 00:11:22.185 ************************************ 00:11:22.185 13:29:35 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:11:22.185 13:29:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:11:22.185 13:29:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:11:22.185 13:29:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:11:22.185 13:29:35 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:22.185 13:29:35 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:11:23.118 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:23.118 13:29:35 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:11:23.118 13:29:35 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:11:23.118 13:29:35 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:11:23.118 13:29:35 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e 
/sys/bus/pci/devices/0000:00:11.0 ]] 00:11:23.118 13:29:35 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:11:23.118 13:29:35 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:11:23.118 13:29:35 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:11:23.118 13:29:35 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:11:23.118 13:29:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:23.118 13:29:35 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:23.684 00:11:23.684 real 0m1.504s 00:11:23.684 user 0m0.645s 00:11:23.684 sys 0m0.847s 00:11:23.684 13:29:36 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:23.684 ************************************ 00:11:23.684 END TEST allowed 00:11:23.684 ************************************ 00:11:23.684 13:29:36 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:11:23.684 ************************************ 00:11:23.684 00:11:23.684 real 0m4.656s 00:11:23.684 user 0m2.009s 00:11:23.684 sys 0m2.607s 00:11:23.684 13:29:36 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:23.684 13:29:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:11:23.684 END TEST acl 00:11:23.684 ************************************ 00:11:23.684 13:29:36 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:11:23.684 13:29:36 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:23.684 13:29:36 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:23.684 13:29:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:23.684 ************************************ 00:11:23.684 START TEST hugepages 00:11:23.684 ************************************ 00:11:23.684 13:29:36 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:11:23.943 * Looking for test storage... 
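The allowed/denied checks above boil down to resolving each controller's driver symlink under sysfs and comparing its basename against nvme (or uio_pci_generic once setup.sh rebinds a device). A compact sketch of that verification, with a hypothetical check_driver helper and the two PCI addresses seen in the trace:
  # Sketch: report which kernel driver currently owns a PCI function.
  check_driver() {
    local bdf=$1
    [[ -e /sys/bus/pci/devices/$bdf/driver ]] || { echo "$bdf -> (no driver)"; return; }
    local link
    link=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
    echo "$bdf -> ${link##*/}"
  }
  check_driver 0000:00:10.0   # rebound to uio_pci_generic by "setup.sh config" above
  check_driver 0000:00:11.0   # still nvme, as the acl test verifies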
00:11:23.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 4008496 kB' 'MemAvailable: 7380668 kB' 'Buffers: 2436 kB' 'Cached: 3571332 kB' 'SwapCached: 0 kB' 'Active: 873500 kB' 'Inactive: 2804032 kB' 'Active(anon): 114256 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804032 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 316 kB' 'Writeback: 0 kB' 'AnonPages: 105596 kB' 'Mapped: 48716 kB' 'Shmem: 10492 kB' 'KReclaimable: 91668 kB' 'Slab: 173088 kB' 'SReclaimable: 91668 kB' 'SUnreclaim: 81420 kB' 'KernelStack: 6492 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 334992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.943 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
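The entries above are setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time, skipping every key that is not Hugepagesize. A minimal standalone sketch of that scan, with meminfo_value as an assumed name and the per-node handling (visible later in the trace) left out:

# Sketch only: meminfo_value is an assumed name, not the harness's real interface.
meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip fields that are not the requested key
        echo "$val"                        # value is in kB for sized fields
        return 0
    done < /proc/meminfo
    return 1                               # key not present
}

meminfo_value Hugepagesize   # prints 2048 on the host traced here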
00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.944 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
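The requested key is rendered as \H\u\g\e\p\a\g\e\s\i\z\e in every comparison because the script quotes the right-hand side of == and bash's xtrace prints a quoted [[ pattern with each character backslash-escaped, marking it as a literal match rather than a glob. A two-line illustration (the get variable is a placeholder):

get=Hugepagesize
set -x
[[ MemTotal == "$get" ]] || true   # xtrace shows: [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
set +x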
00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:11:23.945 13:29:36 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:11:23.945 13:29:36 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:23.945 13:29:36 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:23.945 13:29:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:23.945 ************************************ 00:11:23.945 START TEST default_setup 00:11:23.945 ************************************ 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:11:23.945 13:29:36 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:24.511 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:24.511 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.775 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:24.775 13:29:37 
setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6013820 kB' 'MemAvailable: 9385936 kB' 'Buffers: 2436 kB' 'Cached: 3571320 kB' 'SwapCached: 0 kB' 'Active: 890740 kB' 'Inactive: 2804036 kB' 'Active(anon): 131496 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 122592 kB' 'Mapped: 48864 kB' 'Shmem: 10468 kB' 'KReclaimable: 91548 kB' 'Slab: 172904 kB' 'SReclaimable: 91548 kB' 'SUnreclaim: 81356 kB' 'KernelStack: 6464 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
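In the pass above, get_meminfo is invoked for AnonHugePages with no node argument, so the /sys/devices/system/node/node/meminfo probe fails (common.sh@23) and the helper falls back to /proc/meminfo before dumping the snapshot and rescanning it field by field; when a node is supplied, the per-node file is used and its leading "Node <n> " prefix is stripped. A hedged sketch of that source selection (the function name and the sed-based strip are my own simplifications):

# Sketch only: choose a meminfo source the way the traced helper appears to.
meminfo_source() {
    local node=$1                 # empty for the machine-wide view
    local f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <n> "; strip it so both
    # sources can be parsed the same way downstream.
    sed -E 's/^Node [0-9]+ +//' "$f"
}

meminfo_source   | grep '^AnonHugePages'   # global view
meminfo_source 0 | grep '^AnonHugePages'   # NUMA node 0, if it exists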
00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.775 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
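The scan above is converging on AnonHugePages (0 kB in the snapshot), and the same field-by-field walk is repeated further down for HugePages_Surp and HugePages_Rsvd; earlier, get_test_nr_hugepages turned the requested 2097152 kB into 1024 pages of the 2048 kB default size. A rough sketch that recomputes those figures outside the harness (the awk reads are my own shortcut, not the helper's method):

# Sketch only: reproduce the numbers the verification pass derives.
default_hugepages=2048                           # kB, Hugepagesize found above
size=2097152                                     # kB, requested by default_setup
nr_hugepages=$(( size / default_hugepages ))     # = 1024 pages

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
free=$(awk  '/^HugePages_Free:/  {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

echo "expected=$nr_hugepages total=$total free=$free surp=$surp rsvd=$rsvd"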
00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:24.776 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6013824 kB' 'MemAvailable: 9385820 kB' 'Buffers: 2436 kB' 'Cached: 3571320 kB' 'SwapCached: 0 kB' 'Active: 890600 kB' 'Inactive: 2804036 kB' 'Active(anon): 131356 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804036 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 122572 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172592 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81284 kB' 'KernelStack: 6516 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 
'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.777 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 
13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.778 
13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:24.778 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6014076 kB' 'MemAvailable: 9386076 kB' 'Buffers: 2436 kB' 'Cached: 3571320 kB' 'SwapCached: 0 kB' 'Active: 890044 kB' 'Inactive: 2804040 kB' 'Active(anon): 130800 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804040 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 121980 kB' 'Mapped: 48984 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172588 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81280 kB' 'KernelStack: 6468 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 
13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.779 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:24.780 nr_hugepages=1024 00:11:24.780 resv_hugepages=0 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:24.780 surplus_hugepages=0 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:24.780 anon_hugepages=0 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:24.780 13:29:37 
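At this point the test holds surp=0 and resv=0 and, per the hugepages.sh@107-110 lines in the trace, verifies that the kernel's HugePages_Total equals nr_hugepages plus surplus and reserved pages (1024 == 1024 + 0 + 0). A hedged sketch of that accounting check follows, reusing the hypothetical get_meminfo_value helper from the earlier sketch; the variable names mirror the trace and the commented values are the ones this run reported.

  # Accounting check mirroring hugepages.sh@107-110 in the trace.
  # get_meminfo_value is the hypothetical helper sketched earlier, not the
  # suite's own function.
  nr_hugepages=1024                                   # requested default pool size
  surp=$(get_meminfo_value HugePages_Surp)            # 0 in this run
  resv=$(get_meminfo_value HugePages_Rsvd)            # 0 in this run
  total=$(get_meminfo_value HugePages_Total)          # 1024 in this run

  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage pool consistent: HugePages_Total=$total"
  else
      echo "unexpected accounting: HugePages_Total=$total, expected $((nr_hugepages + surp + resv))" >&2
      exit 1
  fi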
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:24.780 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6014076 kB' 'MemAvailable: 9386084 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 889844 kB' 'Inactive: 2804048 kB' 'Active(anon): 130600 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121776 kB' 'Mapped: 48924 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172592 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81284 kB' 'KernelStack: 6416 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.781 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:24.782 13:29:37 
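The pass that starts here repeats the lookup per NUMA node: get_nodes finds a single node (no_nodes=1, nodes_sys[0]=1024), and get_meminfo is called with node=0, so mem_f switches to /sys/devices/system/node/node0/meminfo. A short sketch of the same per-node readout, again using the hypothetical helper from above; per-node meminfo files carry the same hugepage counters, prefixed with "Node <id>".

  # Per-NUMA-node readout mirroring the node0 pass above; reuses the
  # hypothetical get_meminfo_value helper. Each node directory exposes its
  # own meminfo with the same hugepage counters.
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*node}
      echo "node$node:" \
           "total=$(get_meminfo_value HugePages_Total "$node")" \
           "free=$(get_meminfo_value HugePages_Free "$node")" \
           "surp=$(get_meminfo_value HugePages_Surp "$node")"
  done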
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6013572 kB' 'MemUsed: 6228400 kB' 'SwapCached: 0 kB' 'Active: 890104 kB' 'Inactive: 2804048 kB' 'Active(anon): 130860 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 3573760 kB' 'Mapped: 48924 kB' 'AnonPages: 122036 kB' 'Shmem: 10468 kB' 'KernelStack: 6484 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91308 kB' 'Slab: 172592 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.782 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.783 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:24.784 node0=1024 expecting 1024 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:24.784 00:11:24.784 real 0m0.949s 00:11:24.784 user 0m0.438s 00:11:24.784 sys 0m0.466s 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:24.784 13:29:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:11:24.784 ************************************ 00:11:24.784 END TEST default_setup 00:11:24.784 ************************************ 00:11:25.042 13:29:37 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:11:25.042 13:29:37 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:25.042 13:29:37 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 
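(Note on the trace above: END TEST default_setup closes a pass in which setup/common.sh scanned /proc/meminfo field by field, setting IFS=': ', reading each line with read -r var val _, and skipping every key other than the one requested, then compared node0's hugepage count against the expected 1024. The next test, per_node_1G_alloc, requests 1 GiB of hugepages on node 0, which at the 2048 kB Hugepagesize shown in the meminfo dumps works out to NRHUGE=512. A minimal stand-alone sketch of that scan pattern and of the page arithmetic follows; get_meminfo_field and its argument handling are illustrative only, not the repo's actual setup/common.sh helpers.)

  #!/usr/bin/env bash
  # Sketch only: fetch one field from /proc/meminfo, or from a per-node
  # meminfo file when a node id is supplied, scanning it the way the
  # xtrace above does (match the key, skip everything else).
  shopt -s extglob   # needed to strip the "Node <n> " prefix, as in the trace
  get_meminfo_field() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS= read -r line; do
          line=${line#Node +([0-9]) }          # per-node files prefix each line
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue     # skip every non-matching key
          echo "$val"
          return 0
      done < "$mem_f"
      return 1
  }

  # The per_node_1G_alloc run asks for 1048576 kB (1 GiB) on node 0; at the
  # 2048 kB Hugepagesize reported in the meminfo dump that is 512 pages,
  # matching nr_hugepages=512 / NRHUGE=512 in the trace.
  hp_kb=$(get_meminfo_field Hugepagesize)      # 2048 on this runner
  echo "pages for 1 GiB: $((1048576 / hp_kb))" # -> 512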
00:11:25.042 13:29:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:25.042 ************************************ 00:11:25.042 START TEST per_node_1G_alloc 00:11:25.042 ************************************ 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:25.042 13:29:37 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:25.303 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:25.303 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:25.303 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # 
verify_nr_hugepages 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7065480 kB' 'MemAvailable: 10437488 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 890036 kB' 'Inactive: 2804048 kB' 'Active(anon): 130792 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 121904 kB' 'Mapped: 48916 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172628 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81320 kB' 'KernelStack: 6436 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.303 13:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.303 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.304 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var 
val 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7066132 kB' 'MemAvailable: 10438140 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 890044 kB' 'Inactive: 2804048 kB' 'Active(anon): 130800 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121868 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172676 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81368 kB' 'KernelStack: 6464 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.305 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:25.306 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:25.307 13:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7066132 kB' 'MemAvailable: 10438140 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 889952 kB' 'Inactive: 2804048 kB' 'Active(anon): 130708 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121864 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172676 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81368 kB' 'KernelStack: 6464 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 
13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.307 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 
13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 
13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.308 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:25.309 nr_hugepages=512 00:11:25.309 resv_hugepages=0 00:11:25.309 surplus_hugepages=0 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:25.309 anon_hugepages=0 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo 
anon_hugepages=0 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:11:25.309 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:25.569 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:25.569 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:25.569 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:25.569 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:25.569 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:25.569 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:25.569 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.569 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7066132 kB' 'MemAvailable: 10438140 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 889920 kB' 'Inactive: 2804048 kB' 'Active(anon): 130676 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121772 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172672 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81364 kB' 'KernelStack: 6464 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
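[Editor's aside] The long run of near-identical trace records above and below comes from the get_meminfo helper in setup/common.sh: it captures /proc/meminfo (or a per-node meminfo file) into an array, strips any leading "Node <n>" prefix, then walks the fields with IFS=': ' until it hits the requested counter and echoes its value. A minimal standalone sketch of that parsing technique follows; the function name my_get_meminfo and its fallback behaviour are illustrative assumptions, not the SPDK helper itself.

  # Sketch, assuming a simplified re-implementation of the idea only.
  # Usage: my_get_meminfo <field> [node]   e.g. my_get_meminfo HugePages_Total
  my_get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node queries read the node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
      # Per-node files prefix every row with "Node <n>"; drop those two tokens.
      if [[ $line == Node\ * ]]; then
        line=${line#Node }
        line=${line#* }
      fi
      IFS=': ' read -r var val _ <<<"$line"
      if [[ $var == "$get" ]]; then
        echo "${val:-0}"
        return 0
      fi
    done <"$mem_f"
    echo 0   # assumption: report 0 when the field is absent
  }

For example, my_get_meminfo HugePages_Total would print 512 on the configuration shown in the trace above, and my_get_meminfo HugePages_Free 0 would read /sys/devices/system/node/node0/meminfo instead of /proc/meminfo.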
00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.570 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for 
node in "${!nodes_test[@]}" 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7074896 kB' 'MemUsed: 5167076 kB' 'SwapCached: 0 kB' 'Active: 889964 kB' 'Inactive: 2804048 kB' 'Active(anon): 130720 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 3573760 kB' 'Mapped: 48728 kB' 'AnonPages: 121856 kB' 'Shmem: 10468 kB' 'KernelStack: 6464 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91308 kB' 'Slab: 172672 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.571 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
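[Editor's aside] The scan in progress here is the per-node step: after confirming the global HugePages_Total, the script queries HugePages_Surp from /sys/devices/system/node/node0/meminfo, folds reserved and surplus pages into its per-node bookkeeping, and finally prints the expectation ("node0=512 expecting 512"). A hedged sketch of an equivalent per-node check is below; it uses the kernel's standard per-node hugetlb sysfs counters rather than parsing the node meminfo file, and the expected count of 512 is taken from this run.

  # Sketch, not the SPDK verification itself: report each NUMA node's 2 MiB
  # hugepage pool against an expected per-node count.
  expected=512
  hp=hugepages/hugepages-2048kB
  for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(<"$node_dir/$hp/nr_hugepages")
    surp=$(<"$node_dir/$hp/surplus_hugepages")
    echo "node$node=$total (surplus=$surp) expecting $expected"
    (( total == expected )) || echo "node$node: unexpected hugepage count" >&2
  done

On the single-node VM used in this run, such a check would report node0=512 with no surplus, matching the nr_hugepages=512 target echoed earlier in the trace.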
00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.572 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.573 13:29:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:11:25.573 node0=512 expecting 512 00:11:25.573 ************************************ 00:11:25.573 END TEST per_node_1G_alloc 00:11:25.573 ************************************ 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:11:25.573 00:11:25.573 real 0m0.582s 00:11:25.573 user 0m0.284s 00:11:25.573 sys 0m0.278s 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:25.573 13:29:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:25.573 13:29:38 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:11:25.573 13:29:38 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:25.573 13:29:38 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:25.573 13:29:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:25.573 ************************************ 00:11:25.573 START TEST even_2G_alloc 00:11:25.573 ************************************ 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:25.573 13:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:25.573 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:25.832 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:25.832 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:25.832 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241972 kB' 'MemFree: 6028208 kB' 'MemAvailable: 9400216 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 890384 kB' 'Inactive: 2804048 kB' 'Active(anon): 131140 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122276 kB' 'Mapped: 48848 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172652 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81344 kB' 'KernelStack: 6488 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.832 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.833 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6028208 kB' 'MemAvailable: 9400216 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 890144 kB' 'Inactive: 2804048 kB' 'Active(anon): 130900 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122032 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172652 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81344 kB' 'KernelStack: 6480 kB' 'PageTables: 4352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:25.834 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.096 13:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.096 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
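For the verification step, the traced hugepages.sh lines (@96-@100) take three such readings in a row: AnonHugePages only when transparent hugepages are not set to [never], then HugePages_Surp and HugePages_Rsvd. A compact, self-contained sketch of that bookkeeping, using awk in place of the helper sketched earlier and the standard THP sysfs switch (variable names follow the trace; this is a paraphrase, not the script itself):

thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
fi
surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
echo "anon=$anon surp=$surp resv=$resv"   # all three are 0 in the run traced here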
00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 
13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.097 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:26.098 13:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6028208 kB' 'MemAvailable: 9400216 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 889920 kB' 'Inactive: 2804048 kB' 'Active(anon): 130676 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 121784 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172652 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81344 kB' 'KernelStack: 6448 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
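The sizing that produced the NRHUGE=1024 / HUGE_EVEN_ALLOC=yes environment traced at the start of this even_2G_alloc test (hugepages.sh@152-@153) is plain division. A worked version of that arithmetic, assuming the 2097152 argument is in kB (which the 1024-page result and the test's 2G name both suggest) and that default_hugepages is the 2048 kB Hugepagesize reported in the meminfo dumps above:

size_kb=2097152                                   # requested total: 2 GiB, expressed in kB
default_hugepages=2048                            # Hugepagesize from /proc/meminfo, in kB
nr_hugepages=$(( size_kb / default_hugepages ))   # 2097152 / 2048 = 1024 pages
echo "NRHUGE=$nr_hugepages HUGE_EVEN_ALLOC=yes"   # matches the values set before scripts/setup.sh runs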
00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.098 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:26.099 nr_hugepages=1024 00:11:26.099 resv_hugepages=0 00:11:26.099 surplus_hugepages=0 00:11:26.099 anon_hugepages=0 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:26.099 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:26.100 13:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6028532 kB' 'MemAvailable: 9400540 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 889892 kB' 'Inactive: 2804048 kB' 'Active(anon): 130648 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 121752 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172652 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81344 kB' 'KernelStack: 6448 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54788 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:38 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 
13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.100 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
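The identical-looking scan running through this stretch of the trace is a second get_meminfo call, this time hunting for HugePages_Total; it eventually echoes 1024, and setup/hugepages.sh then asserts that the pool is consistent via (( 1024 == nr_hugepages + surp + resv )) before distributing it across nodes. A hedged sketch of that same invariant, reading the counters with awk for brevity rather than through the script's own loop, and taking the requested pool size from /proc/sys/vm/nr_hugepages since the test's own variable is not available outside the script:

#!/usr/bin/env bash
# Sketch of the pool-consistency check mirrored from the trace:
# the script expects HugePages_Total to equal the requested pool
# (nr_hugepages) plus surplus and reserved pages.
nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)

echo "nr_hugepages=$nr_hugepages resv=$resv surplus=$surp"
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool matches the script's expectation: $total pages"
else
    echo "mismatch: HugePages_Total=$total, expected $((nr_hugepages + surp + resv))" >&2
fi

In this run both surplus and reserved are zero, so the check reduces to HugePages_Total == 1024, i.e. the even 2 GiB of 2048 kB pages that the test name promises.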
00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 
13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:26.101 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6028532 kB' 'MemUsed: 6213440 kB' 'SwapCached: 0 kB' 'Active: 889980 kB' 'Inactive: 2804048 kB' 'Active(anon): 130736 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 3573760 kB' 'Mapped: 48728 kB' 'AnonPages: 
121840 kB' 'Shmem: 10468 kB' 'KernelStack: 6448 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91308 kB' 'Slab: 172652 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81344 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.102 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.103 13:29:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:11:26.103 node0=1024 expecting 1024 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:26.103 00:11:26.103 real 0m0.535s 00:11:26.103 user 0m0.257s 00:11:26.103 sys 0m0.293s 00:11:26.103 ************************************ 00:11:26.103 END TEST even_2G_alloc 00:11:26.103 ************************************ 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:26.103 13:29:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:26.103 13:29:39 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:11:26.103 13:29:39 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:26.103 13:29:39 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:26.103 13:29:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:26.103 ************************************ 00:11:26.103 START TEST odd_alloc 00:11:26.103 ************************************ 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:11:26.103 13:29:39 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:26.103 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:26.361 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:26.361 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:26.361 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:26.622 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 
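The entries above size the odd_alloc request: get_test_nr_hugepages is handed 2098176 kB, nr_hugepages comes out as 1025, and with a single node the whole count lands in nodes_test[0] before HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes hand off to scripts/setup.sh. The snippet below is illustrative arithmetic only, assuming the default 2048 kB (2 MiB) hugepage size; it is not the script's exact code, it just shows how the deliberately odd page count falls out of the numbers in the trace.

    # Illustrative only: how 2098176 kB maps to the 1025-page target in the trace,
    # assuming a 2048 kB (2 MiB) default hugepage size.
    size_kb=2098176
    default_hugepage_kb=2048
    nr_hugepages=$(( (size_kb + default_hugepage_kb - 1) / default_hugepage_kb ))
    echo "$nr_hugepages"          # 1025 (2098176 / 2048 = 1024.5, rounded up)
    nodes_test[0]=$nr_hugepages   # single NUMA node in this VM, so node 0 takes all of it

HUGEMEM=2049 is the same quantity expressed in MiB (2049 * 1024 kB = 2098176 kB).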
00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6048880 kB' 'MemAvailable: 9420888 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 890240 kB' 'Inactive: 2804048 kB' 'Active(anon): 130996 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122156 kB' 'Mapped: 48836 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172660 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81352 kB' 'KernelStack: 6484 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 
13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
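The long runs of '[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]' / 'continue' entries here, like the \H\u\g\e\P\a\g\e\s\_\S\u\r\p runs elsewhere in this excerpt, are xtrace output of a field-by-field scan of the meminfo data: the backslashes are how bash's trace prints a quoted right-hand side of ==, not corruption. Each iteration splits one line on ': ', compares the key against the one requested, and moves on until the match is found, whose value is then echoed (0 for AnonHugePages in this run). The function below is a simplified stand-in for that loop, reading the system-wide /proc/meminfo directly; it is not the exact setup/common.sh implementation, which also supports per-node files as sketched further below.

    # Simplified stand-in for the get_meminfo loop traced here.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do     # e.g. 'AnonHugePages' '0' 'kB'
            [[ $var == "$get" ]] || continue     # skip every other field
            echo "$val"                          # numeric value; the unit lands in $_
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo AnonHugePages                    # 0 in this run, per the trace above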
00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.623 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 
13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6048880 kB' 'MemAvailable: 9420888 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 889892 kB' 'Inactive: 2804048 kB' 'Active(anon): 130648 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121756 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172656 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81348 kB' 'KernelStack: 6448 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.624 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 
13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.625 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
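Just above, before the HugePages_Rsvd scan begins, get_meminfo's preamble repeats: mem_f defaults to /proc/meminfo, the [[ -e /sys/devices/system/node/node/meminfo ]] test probes for a per-node file (the doubled 'node' with no number is simply the empty $node expanding inside the path, since no node argument was passed), the chosen file is slurped with mapfile -t mem, and "${mem[@]#Node +([0-9]) }" strips the 'Node N ' prefix that per-node meminfo lines carry. The sketch below mirrors that source selection as shown in the trace; treat it as an illustration rather than a verbatim copy of setup/common.sh.

    # Sketch of the meminfo source selection shown in the trace above.
    shopt -s extglob                                  # needed for the +([0-9]) pattern
    node=""                                           # empty -> system-wide stats
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"                         # one array element per line
    mem=("${mem[@]#Node +([0-9]) }")                  # drop the 'Node N ' prefix, if any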
00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6048880 kB' 'MemAvailable: 9420888 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 889992 kB' 'Inactive: 2804048 kB' 'Active(anon): 130748 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121872 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172656 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81348 kB' 'KernelStack: 6464 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.626 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
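Once HugePages_Rsvd has been read the same way, verify_nr_hugepages holds anon, surp and resv alongside the HugePages_Total and HugePages_Free figures visible in the snapshots above (1025 each), and it finishes with the same per-node check that closed even_2G_alloc at the start of this excerpt: accumulate the per-node counts into nodes_test, echo 'nodeN=<seen> expecting <requested>', and assert that they match. The sketch below condenses that closing check; it is modeled on the even_2G_alloc lines earlier in this excerpt rather than on the exact hugepages.sh source, and it assumes the single node ends up holding all 1025 pages, as the HugePages_Total lines above suggest.

    # Condensed sketch of the final verification step; names follow the trace.
    nr_hugepages=1025
    surp=0; resv=0                               # from the scans traced here
    nodes_test=([0]=1025)                        # per-node count gathered above
    for node in "${!nodes_test[@]}"; do
        echo "node${node}=${nodes_test[node]} expecting ${nr_hugepages}"
        [[ ${nodes_test[node]} == "$nr_hugepages" ]]
    done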
00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.627 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 
13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:11:26.628 nr_hugepages=1025 00:11:26.628 resv_hugepages=0 00:11:26.628 surplus_hugepages=0 00:11:26.628 anon_hugepages=0 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6048880 kB' 'MemAvailable: 9420888 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 889900 kB' 'Inactive: 2804048 kB' 'Active(anon): 130656 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121764 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172648 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81340 kB' 'KernelStack: 6448 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
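Once the reserved count comes back as 0, the hugepages.sh@107-@110 checks in the trace boil down to simple accounting: the odd_alloc test asks the kernel for 1025 hugepages and verifies that the reported HugePages_Total equals the requested count plus any surplus and reserved pages. The sketch below is a hedged illustration of that check, not the SPDK script itself; it reuses the hypothetical get_meminfo_sketch helper from above, and note that writing nr_hugepages requires root and the kernel may hand back fewer pages than requested, which is precisely what the comparison would catch.

#!/usr/bin/env bash
# Sketch of the odd_alloc-style accounting check traced above (illustration only).
# Assumes the get_meminfo_sketch helper defined earlier in this log's annotations.
verify_odd_alloc_sketch() {
    local want=1025                              # odd page count, as in the test
    echo "$want" > /proc/sys/vm/nr_hugepages     # requires root; kernel may allocate fewer
    local total surp resv
    total=$(get_meminfo_sketch HugePages_Total)
    surp=$(get_meminfo_sketch HugePages_Surp)
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    # Same comparison as the trace: reported total must equal the requested
    # count plus surplus and reserved pages (all 0 here, so 1025 == 1025).
    (( total == want + surp + resv ))
}

In the log itself, the meminfo dump reports HugePages_Total: 1025 with surplus and reserved both 0, so this system-wide comparison and the later per-node "node0=1025 expecting 1025" check both pass.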
00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.628 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.629 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6048880 kB' 'MemUsed: 6193092 kB' 'SwapCached: 0 kB' 'Active: 889920 kB' 'Inactive: 2804048 kB' 'Active(anon): 130676 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 3573760 kB' 'Mapped: 48728 kB' 'AnonPages: 121780 kB' 'Shmem: 10468 kB' 'KernelStack: 6448 kB' 'PageTables: 4260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91308 kB' 'Slab: 172648 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.630 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:26.631 node0=1025 expecting 1025 00:11:26.631 ************************************ 00:11:26.631 END TEST odd_alloc 00:11:26.631 ************************************ 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:11:26.631 00:11:26.631 real 0m0.533s 00:11:26.631 user 0m0.263s 00:11:26.631 sys 0m0.277s 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:26.631 13:29:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:26.631 13:29:39 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:11:26.631 13:29:39 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:26.631 13:29:39 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:26.631 13:29:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:26.631 ************************************ 00:11:26.631 START TEST custom_alloc 00:11:26.631 ************************************ 00:11:26.631 13:29:39 
setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:11:26.631 
13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:11:26.631 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:11:26.632 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:11:26.632 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:11:26.632 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:11:26.632 13:29:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:11:26.632 13:29:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:26.632 13:29:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:27.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:27.202 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:27.202 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7100212 kB' 'MemAvailable: 10472220 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 890536 kB' 'Inactive: 2804048 kB' 'Active(anon): 131292 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 122400 kB' 'Mapped: 48824 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172656 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81348 kB' 'KernelStack: 6420 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.202 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:27.203 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7100296 kB' 'MemAvailable: 10472304 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 890060 kB' 'Inactive: 2804048 kB' 'Active(anon): 130816 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121920 kB' 'Mapped: 48824 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172648 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81340 kB' 'KernelStack: 6456 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 
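For readability, here is a condensed sketch of the get_meminfo lookup that the xtrace keeps repeating above and below: the meminfo file is slurped whole, any per-node "Node <n> " prefix is stripped, and each "Key: value" pair is split on ': ' until the requested key matches, at which point the value is echoed. The extglob strip and the field names come straight from the trace; the function name and simplified argument handling are illustrative, not the verbatim setup/common.sh helper.

```bash
#!/usr/bin/env bash
# Condensed sketch of the get_meminfo lookup traced above (setup/common.sh).
# A simplified reading of what the xtrace shows, not the exact helper.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node <n> "

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo_sketch AnonHugePages    # -> 0 in the run above
get_meminfo_sketch HugePages_Total  # -> 512 once the custom allocation is in place
```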
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.204 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.205 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7100296 kB' 'MemAvailable: 10472304 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 889968 kB' 'Inactive: 2804048 kB' 'Active(anon): 130724 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121880 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172644 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81336 kB' 'KernelStack: 6464 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.206 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:27.207 nr_hugepages=512 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:11:27.207 resv_hugepages=0 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:27.207 surplus_hugepages=0 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:27.207 anon_hugepages=0 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:11:27.207 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7100296 kB' 'MemAvailable: 10472304 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 889912 kB' 'Inactive: 2804048 kB' 'Active(anon): 130668 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121772 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172640 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81332 kB' 'KernelStack: 6448 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
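The meminfo snapshot printed at 00:11:27.208 already shows the state this scan is about to confirm: HugePages_Total: 512, HugePages_Free: 512, HugePages_Rsvd: 0, HugePages_Surp: 0, with Hugepagesize: 2048 kB (hence Hugetlb: 1048576 kB in total). Outside the harness the same counters could be inspected directly with something like the one-liner below (illustrative only, not part of the test scripts); setup/hugepages.sh@110 then asserts (( 512 == nr_hugepages + surp + resv )) against the values get_meminfo extracts.

    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize)' /proc/meminfo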
00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.208 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7100296 kB' 'MemUsed: 5141676 kB' 'SwapCached: 0 kB' 'Active: 889912 kB' 'Inactive: 2804048 kB' 'Active(anon): 130668 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 3573760 kB' 'Mapped: 48728 kB' 'AnonPages: 121772 kB' 'Shmem: 10468 kB' 'KernelStack: 6448 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91308 kB' 'Slab: 172640 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.209 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.210 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:11:27.211 13:29:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:11:27.211 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:27.211 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:27.211 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:27.211 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:27.211 node0=512 expecting 512 00:11:27.211 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:11:27.211 13:29:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:11:27.211 00:11:27.211 real 0m0.505s 00:11:27.211 user 0m0.252s 00:11:27.211 sys 0m0.288s 00:11:27.211 13:29:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:27.211 13:29:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:27.211 ************************************ 00:11:27.211 END TEST custom_alloc 00:11:27.211 ************************************ 00:11:27.211 13:29:40 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:11:27.211 13:29:40 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:27.211 13:29:40 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:27.211 13:29:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:27.211 ************************************ 00:11:27.211 START TEST no_shrink_alloc 00:11:27.211 ************************************ 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # 
user_nodes=('0') 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:27.211 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:27.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:27.732 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:27.732 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:27.732 13:29:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.732 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6052684 kB' 'MemAvailable: 9424692 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 890176 kB' 'Inactive: 2804048 kB' 'Active(anon): 130932 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122092 kB' 'Mapped: 48916 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172660 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81352 kB' 'KernelStack: 6452 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
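For no_shrink_alloc the harness requested 2097152 kB of the default 2048 kB pages on node 0, i.e. nr_hugepages=1024, and the snapshot at 00:11:27.733 indeed reports HugePages_Total: 1024 and HugePages_Free: 1024. The "always [madvise] never" string tested at setup/hugepages.sh@96 is the transparent hugepage mode; because it is not set to [never], verify_nr_hugepages also records AnonHugePages, which is what the scan above is extracting. A rough equivalent of that branch, assuming the mode string comes from the usual sysfs file (the trace itself does not show where it was read from):

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP is enabled, so anonymous huge page usage is recorded as well
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi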
00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.733 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6053204 kB' 'MemAvailable: 9425212 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 890340 kB' 'Inactive: 2804048 kB' 'Active(anon): 131096 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 122256 kB' 'Mapped: 48916 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172664 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81356 kB' 'KernelStack: 6436 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- 
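The xtrace output above is setup/common.sh's get_meminfo helper resolving AnonHugePages: it prints the captured /proc/meminfo snapshot, then walks it field by field with IFS=': ' and read -r var val _, skipping every non-matching key with continue and echoing the value (0 here) once the requested field matches, which hugepages.sh@97 stores as anon=0. A minimal standalone sketch of that parsing pattern, using a hypothetical helper name get_meminfo_value and reading only the system-wide /proc/meminfo (the traced helper can also use a per-node /sys/devices/system/node/node<N>/meminfo path when a node is given):

    get_meminfo_value() {
        local get=$1 var val _
        # Walk /proc/meminfo one "Key: value kB" line at a time, mirroring the
        # continue/echo/return pattern visible in the trace above.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    # Example: anon=$(get_meminfo_value AnonHugePages)   # 0 on this runner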
setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.734 13:29:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.734 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.735 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6053204 kB' 'MemAvailable: 9425212 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 889876 kB' 'Inactive: 2804048 kB' 'Active(anon): 130632 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 121996 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172664 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81356 kB' 'KernelStack: 6464 kB' 'PageTables: 4304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.736 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.737 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:27.738 nr_hugepages=1024 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:27.738 resv_hugepages=0 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:27.738 surplus_hugepages=0 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:27.738 anon_hugepages=0 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6053344 kB' 'MemAvailable: 9425352 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 889916 kB' 'Inactive: 2804048 kB' 'Active(anon): 130672 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 121776 kB' 'Mapped: 48728 kB' 'Shmem: 10468 kB' 'KReclaimable: 91308 kB' 'Slab: 172656 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81348 kB' 'KernelStack: 6432 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
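With anon, surp and resv all parsed as 0, the hugepages.sh lines traced here (@102 through @110) print the summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and then verify that the expected count of 1024 still matches the kernel's view, i.e. (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), before re-reading HugePages_Total. A rough equivalent of that consistency check, reusing the hypothetical get_meminfo_value sketch above together with the standard /proc/sys/vm/nr_hugepages knob (variable names are illustrative, not the script's own):

    expected=1024                                     # count requested earlier in this test
    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)     # what the kernel currently reports
    surp=$(get_meminfo_value HugePages_Surp)          # 0 in the trace above
    resv=$(get_meminfo_value HugePages_Rsvd)          # 0 in the trace above
    # Assumption based on the checks traced above: the count must not have been
    # shrunk or padded out by surplus/reserved pages since it was configured.
    (( expected == nr_hugepages + surp + resv )) || echo "surplus/reserved pages changed the total"
    (( expected == nr_hugepages ))               || echo "nr_hugepages drifted from the requested 1024"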
continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:11:27.738 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 
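What this long stretch of trace is doing: the get_meminfo helper in setup/common.sh scans a meminfo dump one key at a time. With IFS=': ' each line splits into a key and a value, every key other than the requested one falls through to continue, and the value of the matching key is echoed back (0 for HugePages_Rsvd just above, 1024 for HugePages_Total a little further down). A minimal stand-alone sketch of the same idiom, using an illustrative function name rather than the exact SPDK helper:

    # Print one value out of /proc/meminfo, e.g.: get_meminfo_value HugePages_Rsvd
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other meminfo key
            echo "$val"                        # value of the requested key (trailing unit is dropped)
            return 0
        done < /proc/meminfo
        return 1                               # key not present at all
    }

Run against the dump printed above, get_meminfo_value HugePages_Total would print 1024, which is the echo 1024 that eventually closes this scan.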
13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 
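The resv=0, nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 values echoed a little earlier feed the two arithmetic guards at hugepages.sh@107 and @109: the total reported by the kernel has to equal nr_hugepages plus surplus plus reserved pages, and then has to match nr_hugepages exactly. With this run's numbers that is simply 1024 == 1024 + 0 + 0. A short sketch of the same assertion, with illustrative variable names:

    nr_hugepages=1024   # what the test asked for
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd
    total=1024          # HugePages_Total as reported by the kernel

    # both guards must hold or the allocation is considered inconsistent
    (( total == nr_hugepages + surp + resv )) || echo 'unexpected hugepage accounting' >&2
    (( total == nr_hugepages ))               || echo 'allocation fell short of the request' >&2

The second get_meminfo HugePages_Total scan running through these lines is what supplies the value of total for that check.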
13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.739 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6053344 kB' 'MemUsed: 6188628 kB' 'SwapCached: 0 kB' 'Active: 890020 kB' 'Inactive: 2804048 kB' 'Active(anon): 130776 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 3573760 kB' 'Mapped: 48728 kB' 'AnonPages: 121884 kB' 'Shmem: 10468 kB' 'KernelStack: 6448 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91308 kB' 'Slab: 172656 kB' 'SReclaimable: 91308 kB' 'SUnreclaim: 81348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.740 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 
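This pass of the scanner is the per-node variant: with node=0, common.sh@23-@24 (visible just above) swap mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, mapfile reads that file into an array, and the extglob expansion "${mem[@]#Node +([0-9]) }" strips the leading "Node 0 " that every line of a per-node meminfo carries, so the same key/value scan can run unchanged. A small sketch of that source selection and prefix stripping, with illustrative names:

    shopt -s extglob                            # needed for the +([0-9]) pattern below
    node=0
    mem_f=/proc/meminfo
    # prefer the per-node view when the sysfs file exists
    if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
    fi

    mapfile -t mem < "$mem_f"                   # one array element per meminfo line
    mem=("${mem[@]#Node +([0-9]) }")            # drop the "Node 0 " prefix (per-node files only)
    printf '%s\n' "${mem[@]}" | grep HugePages_Surp

When no node is given, the same test at common.sh@23 probes the nonexistent path /sys/devices/system/node/node/meminfo and quietly falls back to /proc/meminfo, which is why the system-wide scans earlier show [[ -n '' ]] at common.sh@25.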
13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:27.741 
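Around hugepages.sh@27-@33 and @115-@128 (just above and just below) the test builds its per-node expectation: get_nodes globs /sys/devices/system/node/node+([0-9]) and records 1024 pages for each node it finds (one node here, so no_nodes=1), the reserved and surplus counts read back from the node are folded into nodes_test, and the loop ends by printing 'node0=1024 expecting 1024' and comparing the two. A compressed sketch of that bookkeeping under the same single-node assumption (the array names mirror the script, but the sketch is illustrative rather than a copy of hugepages.sh):

    shopt -s extglob
    declare -a nodes_sys nodes_test
    expected_per_node=1024

    # one entry per NUMA node present on the machine
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$expected_per_node
        nodes_test[${node##*node}]=$expected_per_node
    done

    resv=0 surp=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))      # reserved pages still count toward the node
        (( nodes_test[node] += surp ))      # so do surplus pages (both 0 in this run)
        echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} -eq ${nodes_test[node]} ]] || echo "node${node} mismatch" >&2
    done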
13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:27.741 node0=1024 expecting 1024 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:11:27.741 13:29:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:27.999 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:27.999 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:27.999 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:28.290 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:28.290 13:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6053792 kB' 'MemAvailable: 9425796 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 885740 kB' 'Inactive: 2804048 kB' 'Active(anon): 126496 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 117604 kB' 'Mapped: 48244 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 172516 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 81216 kB' 'KernelStack: 6332 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.290 13:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.290 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.291 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6053568 kB' 'MemAvailable: 9425572 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 885400 kB' 'Inactive: 2804048 kB' 'Active(anon): 126156 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 117260 kB' 'Mapped: 48104 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 172540 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 81240 kB' 'KernelStack: 6292 kB' 'PageTables: 3668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.292 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 
13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.293 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.294 13:29:41 
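
The backslash-laden patterns such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are an artifact of xtrace output, not of the script source: when the right-hand side of a [[ == ]] comparison is quoted, bash's set -x trace re-quotes it by escaping every character to show that it is matched literally rather than as a glob. A short reproduction (the names are chosen only for the example):

    # Run with `bash -x` to reproduce the escaped form seen in this log.
    set -x
    var=HugePages_Surp
    [[ $var == "HugePages_Surp" ]] && echo literal   # traced as \H\u\g\e\P\a\g\e\s\_\S\u\r\p
    [[ $var == HugePages_* ]] && echo glob           # unquoted RHS stays a glob in the trace
    set +x
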
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6053568 kB' 'MemAvailable: 9425572 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 885116 kB' 'Inactive: 2804048 kB' 'Active(anon): 125872 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 116976 kB' 'Mapped: 48172 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 172500 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 81200 kB' 'KernelStack: 6292 kB' 'PageTables: 3664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 
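
The common.sh@28-29 steps slurp the whole meminfo file into an array with mapfile -t and then strip a leading "Node <n> " prefix with an extglob pattern; that strip only matters when a per-node meminfo file is read, and for /proc/meminfo it is a no-op. The long printf '%s\n' 'MemTotal: ...' entry is that array being expanded back into lines for the read loop. A sketch of the prefix strip, using made-up sample lines:

    # Sketch of the "Node <n> " prefix strip traced at common.sh@29.
    shopt -s extglob
    mapfile -t mem < <(printf '%s\n' \
        'Node 0 MemTotal: 12241972 kB' \
        'Node 0 HugePages_Total: 1024')
    mem=("${mem[@]#Node +([0-9]) }")   # no-op for plain /proc/meminfo lines
    printf '%s\n' "${mem[@]}"
    # MemTotal: 12241972 kB
    # HugePages_Total: 1024
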
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.294 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.295 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:11:28.296 nr_hugepages=1024 00:11:28.296 13:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:11:28.296 resv_hugepages=0 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:11:28.296 surplus_hugepages=0 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:11:28.296 anon_hugepages=0 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6053568 kB' 'MemAvailable: 9425572 kB' 'Buffers: 2436 kB' 'Cached: 3571324 kB' 'SwapCached: 0 kB' 'Active: 885064 kB' 'Inactive: 2804048 kB' 'Active(anon): 125820 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 116932 kB' 'Mapped: 47988 kB' 'Shmem: 10468 kB' 'KReclaimable: 91300 kB' 'Slab: 172500 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 81200 kB' 'KernelStack: 6352 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 165740 kB' 'DirectMap2M: 6125568 kB' 'DirectMap1G: 8388608 kB' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.296 13:29:41 
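
By this point the no_shrink_alloc test has collected anon=0, surp=0 and resv=0, echoes them together with the expected pool size (nr_hugepages=1024), and asserts that the hugepage pool sits exactly at the requested size before re-reading HugePages_Total for the checks that follow. A condensed, self-contained sketch of that accounting, with the control flow inferred from the trace rather than taken from the real hugepages.sh:

    # Same lookup as the earlier sketch, repeated so this block runs on its own.
    get_meminfo() {
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$1" ]] && { echo "${val:-0}"; return 0; }
        done < /proc/meminfo
        echo 0
    }

    nr_hugepages=1024                     # requested pool size
    anon=$(get_meminfo AnonHugePages)     # 0 in this run
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    total=$(get_meminfo HugePages_Total)  # 1024

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"

    # The pool must sit exactly at the requested size: nothing borrowed as
    # surplus, nothing left reserved, and the total matching the request.
    (( total == nr_hugepages + surp + resv ))
    (( total == nr_hugepages ))
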
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 
13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.296 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.297 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
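The xtrace run surrounding this point is setup/common.sh's get_meminfo helper walking a meminfo file field by field with IFS=': ', skipping every key until it reaches the one requested (HugePages_Total above, HugePages_Surp for node 0 further below), then echoing that key's value. A minimal standalone sketch of that lookup, assuming the standard 'Key: value kB' layout; get_meminfo_field is an illustrative name, not the real helper:

#!/usr/bin/env bash
# Sketch only: look up one field from /proc/meminfo, or from a node-local
# meminfo file under /sys/devices/system/node when a node id is given.
shopt -s extglob

get_meminfo_field() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local var val _

    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it (no-op otherwise).
    mem=("${mem[@]#Node +([0-9]) }")

    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. "1024" for HugePages_Total, "0" for HugePages_Surp
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Usage, mirroring the calls visible in the trace:
#   get_meminfo_field HugePages_Total
#   get_meminfo_field HugePages_Surp 0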
00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:11:28.298 13:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6053568 kB' 'MemUsed: 6188404 kB' 'SwapCached: 0 kB' 'Active: 885324 kB' 'Inactive: 2804048 kB' 'Active(anon): 126080 kB' 'Inactive(anon): 0 kB' 'Active(file): 759244 kB' 'Inactive(file): 2804048 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 3573760 kB' 'Mapped: 47988 kB' 'AnonPages: 117192 kB' 'Shmem: 10468 kB' 'KernelStack: 6352 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91300 kB' 'Slab: 172500 kB' 'SReclaimable: 91300 kB' 'SUnreclaim: 81200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.298 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 
13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:11:28.299 13:29:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:11:28.299 node0=1024 expecting 1024 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:11:28.299 00:11:28.299 real 0m1.002s 00:11:28.299 user 0m0.479s 00:11:28.299 sys 0m0.589s 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:28.299 13:29:41 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:11:28.299 ************************************ 00:11:28.299 END TEST no_shrink_alloc 00:11:28.299 ************************************ 00:11:28.299 13:29:41 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:11:28.299 13:29:41 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:11:28.299 13:29:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:11:28.299 13:29:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:28.299 13:29:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:28.300 13:29:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:11:28.300 13:29:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:11:28.300 13:29:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:11:28.300 13:29:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:11:28.300 00:11:28.300 real 0m4.520s 00:11:28.300 user 0m2.130s 00:11:28.300 sys 0m2.431s 00:11:28.300 13:29:41 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:28.300 13:29:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:11:28.300 ************************************ 00:11:28.300 END TEST hugepages 00:11:28.300 ************************************ 00:11:28.300 13:29:41 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:11:28.300 13:29:41 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:28.300 13:29:41 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:28.300 13:29:41 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:28.300 ************************************ 00:11:28.300 START TEST driver 00:11:28.300 ************************************ 00:11:28.300 13:29:41 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:11:28.558 * Looking for test storage... 
00:11:28.558 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:28.558 13:29:41 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:11:28.558 13:29:41 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:28.558 13:29:41 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:29.124 13:29:41 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:11:29.124 13:29:41 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:29.124 13:29:41 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:29.124 13:29:41 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:11:29.124 ************************************ 00:11:29.124 START TEST guess_driver 00:11:29.124 ************************************ 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:11:29.124 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:11:29.124 Looking for driver=uio_pci_generic 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
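The guess_driver trace that follows shows the selection order: try vfio first (it needs a populated /sys/kernel/iommu_groups or the unsafe no-IOMMU knob set to Y), and only then fall back to uio_pci_generic, which counts as present when modprobe --show-depends resolves it to .ko files. A simplified sketch of that decision, not the literal setup/driver.sh code; pick_pci_driver is an illustrative name:

#!/usr/bin/env bash
# Sketch only: choose a userspace PCI driver the way the trace below does.

pick_pci_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe_vfio=""

    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi

    # vfio is usable when at least one IOMMU group exists, or when the
    # unsafe no-IOMMU mode has been switched on.
    if { (( ${#groups[@]} > 0 )) && [[ -e ${groups[0]} ]]; } || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi

    # Fallback: accept uio_pci_generic if modprobe can resolve it
    # (its --show-depends output lists the .ko files it would insert).
    if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi

    echo 'No valid driver found'
    return 1
}

driver=$(pick_pci_driver)
echo "Looking for driver=$driver"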
00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:11:29.124 13:29:41 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:29.690 13:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:11:29.690 13:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:11:29.690 13:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:29.690 13:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:29.690 13:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:11:29.690 13:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:29.949 13:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:11:29.949 13:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:11:29.949 13:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:11:29.949 13:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:11:29.949 13:29:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:11:29.949 13:29:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:29.949 13:29:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:30.513 00:11:30.513 real 0m1.423s 00:11:30.513 user 0m0.537s 00:11:30.513 sys 0m0.902s 00:11:30.513 13:29:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:30.513 13:29:43 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:11:30.513 ************************************ 00:11:30.513 END TEST guess_driver 00:11:30.513 ************************************ 00:11:30.513 00:11:30.513 real 0m2.109s 00:11:30.513 user 0m0.779s 00:11:30.513 sys 0m1.402s 00:11:30.513 13:29:43 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:30.513 13:29:43 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:11:30.513 ************************************ 00:11:30.513 END TEST driver 00:11:30.513 ************************************ 00:11:30.513 13:29:43 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:11:30.513 13:29:43 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:30.513 13:29:43 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:30.513 13:29:43 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:30.513 ************************************ 00:11:30.513 START TEST devices 00:11:30.513 ************************************ 00:11:30.513 13:29:43 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:11:30.513 * Looking for test storage... 
00:11:30.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:11:30.513 13:29:43 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:11:30.513 13:29:43 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:11:30.513 13:29:43 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:11:30.513 13:29:43 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:31.448 13:29:44 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:11:31.448 13:29:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:11:31.448 13:29:44 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:11:31.448 13:29:44 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:11:31.448 13:29:44 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:31.448 13:29:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:11:31.448 13:29:44 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:11:31.449 13:29:44 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:11:31.449 13:29:44 
setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:11:31.449 No valid GPT data, bailing 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:11:31.449 13:29:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:31.449 13:29:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:31.449 13:29:44 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:11:31.449 No valid GPT data, bailing 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:11:31.449 13:29:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:11:31.449 13:29:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:11:31.449 13:29:44 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:11:31.449 13:29:44 setup.sh.devices -- 
setup/devices.sh@202 -- # pci=0000:00:11.0 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:11:31.449 No valid GPT data, bailing 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:11:31.449 13:29:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:11:31.449 13:29:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:11:31.449 13:29:44 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:11:31.449 13:29:44 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:11:31.449 13:29:44 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:11:31.708 No valid GPT data, bailing 00:11:31.708 13:29:44 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:11:31.708 13:29:44 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:11:31.708 13:29:44 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:11:31.708 13:29:44 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:11:31.708 13:29:44 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:11:31.708 13:29:44 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:11:31.708 13:29:44 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:11:31.708 13:29:44 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:11:31.708 13:29:44 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:11:31.708 13:29:44 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:11:31.708 13:29:44 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:11:31.708 13:29:44 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:11:31.708 13:29:44 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:11:31.708 13:29:44 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:31.708 13:29:44 setup.sh.devices -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:11:31.708 13:29:44 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:31.708 ************************************ 00:11:31.708 START TEST nvme_mount 00:11:31.708 ************************************ 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:31.708 13:29:44 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:11:32.643 Creating new GPT entries in memory. 00:11:32.643 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:32.643 other utilities. 00:11:32.643 13:29:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:11:32.643 13:29:45 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:32.643 13:29:45 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:32.643 13:29:45 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:32.643 13:29:45 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:11:33.578 Creating new GPT entries in memory. 00:11:33.578 The operation has completed successfully. 
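At this point the nvme_mount test has zapped the GPT on the scratch disk and created a single partition (sgdisk --new=1:2048:264191, derived from the 1 GiB size divided by 4096 in the trace); the entries just below format it with mkfs.ext4 -qF and mount it. A condensed sketch of that whole sequence, with the device path and mount point as stand-ins and udevadm settle standing in for the repo's sync_dev_uevents.sh helper:

#!/usr/bin/env bash
# Sketch only: re-create the partition/format/mount flow shown in the log.
set -euo pipefail

disk=/dev/nvme0n1                 # assumption: the test disk, as in the log
part=${disk}p1
mnt=/mnt/nvme_mount_test          # assumption: any scratch mount point

size_bytes=$((1 * 1024 * 1024 * 1024))   # 1 GiB, as passed to partition_drive
units=$((size_bytes / 4096))             # mirrors the "size /= 4096" step
start=2048
end=$((start + units - 1))               # 264191, matching the sgdisk call above

sgdisk "$disk" --zap-all                       # wipe existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:${start}:${end}
udevadm settle                                 # wait for the new partition node

mkdir -p "$mnt"
mkfs.ext4 -qF "$part"
mount "$part" "$mnt"
echo "mounted $part on $mnt"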
00:11:33.578 13:29:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:33.578 13:29:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:33.578 13:29:46 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 72183 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:33.836 13:29:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:34.094 13:29:47 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:34.094 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:34.094 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:34.094 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:34.352 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:34.352 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:11:34.352 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:34.352 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:34.352 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:34.352 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:11:34.352 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:34.352 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:34.352 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:34.352 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:34.352 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:34.352 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:34.352 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:34.610 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:34.610 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:34.610 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:34.610 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:34.610 13:29:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:34.869 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:34.869 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:11:34.869 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:34.869 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:34.869 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:34.869 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:34.869 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:34.869 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:34.869 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:34.869 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:35.126 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:35.126 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:11:35.126 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:35.126 13:29:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:11:35.126 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:11:35.126 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:35.126 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:11:35.127 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:35.127 13:29:48 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:11:35.127 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:11:35.127 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:11:35.127 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:11:35.127 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:35.127 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:11:35.127 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:35.127 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:35.127 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:11:35.127 13:29:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:35.127 13:29:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:35.385 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:35.385 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:11:35.385 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:11:35.385 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:35.385 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:35.385 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:35.385 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:35.385 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:35.643 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:35.643 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:35.643 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:35.643 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:35.643 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:11:35.643 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:11:35.643 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:35.643 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:35.643 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:35.643 13:29:48 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:35.643 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:35.643 00:11:35.643 real 0m3.948s 00:11:35.643 user 0m0.664s 00:11:35.643 sys 0m1.042s 00:11:35.643 13:29:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:35.643 13:29:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:11:35.643 ************************************ 00:11:35.643 END TEST nvme_mount 00:11:35.643 
************************************ 00:11:35.643 13:29:48 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:11:35.643 13:29:48 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:35.643 13:29:48 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:35.643 13:29:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:35.643 ************************************ 00:11:35.643 START TEST dm_mount 00:11:35.643 ************************************ 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:11:35.643 13:29:48 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:11:36.577 Creating new GPT entries in memory. 00:11:36.577 GPT data structures destroyed! You may now partition the disk using fdisk or 00:11:36.577 other utilities. 00:11:36.577 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:11:36.577 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:36.577 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:36.577 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:36.577 13:29:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:11:37.951 Creating new GPT entries in memory. 00:11:37.951 The operation has completed successfully. 
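The dm_mount variant repeats the partitioning for two partitions and then builds a device-mapper node (nvme_dm_test) on top of them before formatting and mounting it. As a rough companion, the sketch below shows a generic two-partition linear concatenation; the exact dmsetup table the test feeds in is not visible in this trace, so the table lines are assumptions based on the 262144-sector partitions being created.
# illustrative sketch, not part of the captured log
disk=/dev/nvme0n1
sgdisk "$disk" --zap-all
sgdisk "$disk" --new=1:2048:264191     # partition 1: 262144 sectors
sgdisk "$disk" --new=2:264192:526335   # partition 2: 262144 sectors
dmsetup create nvme_dm_test <<EOF      # hypothetical linear table spanning both partitions
0 262144 linear ${disk}p1 0
262144 262144 linear ${disk}p2 0
EOF
mkfs.ext4 -qF /dev/mapper/nvme_dm_test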
00:11:37.951 13:29:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:37.951 13:29:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:37.951 13:29:50 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:11:37.951 13:29:50 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:11:37.951 13:29:50 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:11:38.887 The operation has completed successfully. 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 72617 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:38.887 13:29:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:39.146 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:39.146 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:39.146 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:39.146 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:39.146 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:39.146 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:11:39.146 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:39.146 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:11:39.146 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:11:39.413 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:11:39.414 13:29:52 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:39.414 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:39.673 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:39.673 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:39.673 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:39.673 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:11:39.673 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:11:39.673 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:11:39.673 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:11:39.673 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:11:39.673 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:39.673 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:39.673 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:11:39.931 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:39.931 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:11:39.931 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:11:39.931 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:39.931 13:29:52 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:11:39.931 00:11:39.931 real 0m4.191s 00:11:39.931 user 0m0.456s 00:11:39.931 sys 0m0.703s 00:11:39.931 13:29:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:39.931 13:29:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:11:39.931 ************************************ 00:11:39.931 END TEST dm_mount 00:11:39.931 ************************************ 00:11:39.931 13:29:52 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:11:39.931 13:29:52 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:11:39.931 13:29:52 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:11:39.931 13:29:52 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:39.931 13:29:52 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:11:39.931 13:29:52 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:11:39.931 13:29:52 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:11:40.189 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:40.189 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:40.189 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:40.189 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:40.189 13:29:53 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:11:40.189 13:29:53 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:11:40.189 13:29:53 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:11:40.189 13:29:53 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:11:40.189 13:29:53 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:11:40.189 13:29:53 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:11:40.189 13:29:53 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:11:40.189 ************************************ 00:11:40.189 END TEST devices 00:11:40.189 ************************************ 00:11:40.189 00:11:40.189 real 0m9.650s 00:11:40.189 user 0m1.738s 00:11:40.189 sys 0m2.361s 00:11:40.189 13:29:53 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:40.189 13:29:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:11:40.189 ************************************ 00:11:40.189 END TEST setup.sh 00:11:40.189 ************************************ 00:11:40.189 00:11:40.189 real 0m21.207s 00:11:40.189 user 0m6.763s 00:11:40.189 sys 0m8.958s 00:11:40.189 13:29:53 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:40.189 13:29:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:11:40.189 13:29:53 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:11:40.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:40.773 Hugepages 00:11:40.773 node hugesize free / total 00:11:40.773 node0 1048576kB 0 / 0 00:11:40.773 node0 2048kB 2048 / 2048 00:11:40.773 00:11:40.773 Type BDF Vendor Device NUMA Driver Device Block devices 00:11:41.031 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:11:41.031 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:11:41.031 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:11:41.031 13:29:54 -- spdk/autotest.sh@130 -- # uname -s 00:11:41.031 13:29:54 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:11:41.031 13:29:54 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:11:41.031 13:29:54 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:41.597 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:41.855 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:41.855 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:41.855 13:29:54 -- common/autotest_common.sh@1528 -- # sleep 1 00:11:42.869 13:29:55 -- common/autotest_common.sh@1529 -- # bdfs=() 00:11:42.869 13:29:55 -- common/autotest_common.sh@1529 -- # local bdfs 00:11:42.869 13:29:55 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:11:42.869 13:29:55 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:11:42.869 13:29:55 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:42.869 13:29:55 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:42.869 13:29:55 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:42.869 13:29:55 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:42.869 13:29:55 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:42.869 13:29:55 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:11:42.870 13:29:55 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:42.870 13:29:55 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:43.435 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:43.435 Waiting for block devices as requested 00:11:43.435 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:43.435 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:43.435 13:29:56 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:11:43.435 13:29:56 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:11:43.435 13:29:56 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:11:43.435 13:29:56 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:11:43.435 13:29:56 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:43.435 13:29:56 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:11:43.435 13:29:56 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:11:43.435 13:29:56 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:11:43.435 13:29:56 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:11:43.435 13:29:56 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:11:43.435 13:29:56 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:11:43.435 13:29:56 -- common/autotest_common.sh@1541 -- # grep oacs 00:11:43.435 13:29:56 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:11:43.435 13:29:56 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:11:43.435 13:29:56 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:11:43.435 13:29:56 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:11:43.435 13:29:56 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 
00:11:43.435 13:29:56 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:11:43.435 13:29:56 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:11:43.435 13:29:56 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:11:43.435 13:29:56 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:11:43.435 13:29:56 -- common/autotest_common.sh@1553 -- # continue 00:11:43.435 13:29:56 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:11:43.435 13:29:56 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:11:43.435 13:29:56 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:11:43.435 13:29:56 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:11:43.435 13:29:56 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:43.435 13:29:56 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:11:43.435 13:29:56 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:11:43.435 13:29:56 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:11:43.435 13:29:56 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:11:43.435 13:29:56 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:11:43.435 13:29:56 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:11:43.698 13:29:56 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:11:43.698 13:29:56 -- common/autotest_common.sh@1541 -- # grep oacs 00:11:43.698 13:29:56 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:11:43.698 13:29:56 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:11:43.698 13:29:56 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:11:43.698 13:29:56 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:11:43.698 13:29:56 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:11:43.698 13:29:56 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:11:43.698 13:29:56 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:11:43.698 13:29:56 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:11:43.698 13:29:56 -- common/autotest_common.sh@1553 -- # continue 00:11:43.698 13:29:56 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:11:43.698 13:29:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:43.698 13:29:56 -- common/autotest_common.sh@10 -- # set +x 00:11:43.698 13:29:56 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:11:43.698 13:29:56 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:43.698 13:29:56 -- common/autotest_common.sh@10 -- # set +x 00:11:43.698 13:29:56 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:44.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:44.276 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:44.276 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:44.534 13:29:57 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:11:44.534 13:29:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.534 13:29:57 -- common/autotest_common.sh@10 -- # set +x 00:11:44.534 13:29:57 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:11:44.534 13:29:57 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:11:44.534 13:29:57 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:11:44.534 13:29:57 -- common/autotest_common.sh@1573 -- 
# bdfs=() 00:11:44.534 13:29:57 -- common/autotest_common.sh@1573 -- # local bdfs 00:11:44.534 13:29:57 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:11:44.534 13:29:57 -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:44.534 13:29:57 -- common/autotest_common.sh@1509 -- # local bdfs 00:11:44.534 13:29:57 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:44.534 13:29:57 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:44.534 13:29:57 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:44.534 13:29:57 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:11:44.534 13:29:57 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:11:44.534 13:29:57 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:11:44.534 13:29:57 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:11:44.534 13:29:57 -- common/autotest_common.sh@1576 -- # device=0x0010 00:11:44.534 13:29:57 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:44.534 13:29:57 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:11:44.534 13:29:57 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:11:44.534 13:29:57 -- common/autotest_common.sh@1576 -- # device=0x0010 00:11:44.534 13:29:57 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:11:44.534 13:29:57 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:11:44.534 13:29:57 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:11:44.534 13:29:57 -- common/autotest_common.sh@1589 -- # return 0 00:11:44.534 13:29:57 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:11:44.534 13:29:57 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:11:44.534 13:29:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:44.534 13:29:57 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:11:44.534 13:29:57 -- spdk/autotest.sh@162 -- # timing_enter lib 00:11:44.534 13:29:57 -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:44.534 13:29:57 -- common/autotest_common.sh@10 -- # set +x 00:11:44.534 13:29:57 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:44.534 13:29:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:44.534 13:29:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:44.534 13:29:57 -- common/autotest_common.sh@10 -- # set +x 00:11:44.534 ************************************ 00:11:44.534 START TEST env 00:11:44.534 ************************************ 00:11:44.534 13:29:57 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:11:44.534 * Looking for test storage... 
00:11:44.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:11:44.534 13:29:57 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:44.534 13:29:57 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:44.534 13:29:57 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:44.534 13:29:57 env -- common/autotest_common.sh@10 -- # set +x 00:11:44.792 ************************************ 00:11:44.792 START TEST env_memory 00:11:44.792 ************************************ 00:11:44.792 13:29:57 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:11:44.792 00:11:44.792 00:11:44.792 CUnit - A unit testing framework for C - Version 2.1-3 00:11:44.792 http://cunit.sourceforge.net/ 00:11:44.792 00:11:44.792 00:11:44.792 Suite: memory 00:11:44.792 Test: alloc and free memory map ...[2024-05-15 13:29:57.682799] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:11:44.792 passed 00:11:44.792 Test: mem map translation ...[2024-05-15 13:29:57.713545] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:11:44.792 [2024-05-15 13:29:57.713589] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:11:44.792 [2024-05-15 13:29:57.713653] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:11:44.792 [2024-05-15 13:29:57.713665] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:11:44.792 passed 00:11:44.792 Test: mem map registration ...[2024-05-15 13:29:57.777379] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:11:44.792 [2024-05-15 13:29:57.777412] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:11:44.792 passed 00:11:44.792 Test: mem map adjacent registrations ...passed 00:11:44.792 00:11:44.792 Run Summary: Type Total Ran Passed Failed Inactive 00:11:44.792 suites 1 1 n/a 0 0 00:11:44.792 tests 4 4 4 0 0 00:11:44.792 asserts 152 152 152 0 n/a 00:11:44.792 00:11:44.792 Elapsed time = 0.215 seconds 00:11:44.792 00:11:44.792 real 0m0.229s 00:11:44.792 user 0m0.215s 00:11:44.792 sys 0m0.013s 00:11:44.792 13:29:57 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:44.792 13:29:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:11:44.792 ************************************ 00:11:44.792 END TEST env_memory 00:11:44.792 ************************************ 00:11:45.051 13:29:57 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:45.051 13:29:57 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:45.051 13:29:57 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:45.051 13:29:57 env -- common/autotest_common.sh@10 -- # set +x 00:11:45.051 ************************************ 00:11:45.051 START TEST env_vtophys 00:11:45.051 ************************************ 00:11:45.051 13:29:57 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:11:45.051 EAL: lib.eal log level changed from notice to debug 00:11:45.051 EAL: Detected lcore 0 as core 0 on socket 0 00:11:45.051 EAL: Detected lcore 1 as core 0 on socket 0 00:11:45.051 EAL: Detected lcore 2 as core 0 on socket 0 00:11:45.051 EAL: Detected lcore 3 as core 0 on socket 0 00:11:45.051 EAL: Detected lcore 4 as core 0 on socket 0 00:11:45.051 EAL: Detected lcore 5 as core 0 on socket 0 00:11:45.051 EAL: Detected lcore 6 as core 0 on socket 0 00:11:45.051 EAL: Detected lcore 7 as core 0 on socket 0 00:11:45.051 EAL: Detected lcore 8 as core 0 on socket 0 00:11:45.051 EAL: Detected lcore 9 as core 0 on socket 0 00:11:45.051 EAL: Maximum logical cores by configuration: 128 00:11:45.051 EAL: Detected CPU lcores: 10 00:11:45.051 EAL: Detected NUMA nodes: 1 00:11:45.051 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:11:45.051 EAL: Detected shared linkage of DPDK 00:11:45.051 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:11:45.051 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:11:45.051 EAL: Registered [vdev] bus. 00:11:45.051 EAL: bus.vdev log level changed from disabled to notice 00:11:45.051 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:11:45.051 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:11:45.051 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:11:45.051 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:11:45.051 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:11:45.051 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:11:45.051 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:11:45.051 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:11:45.051 EAL: No shared files mode enabled, IPC will be disabled 00:11:45.051 EAL: No shared files mode enabled, IPC is disabled 00:11:45.051 EAL: Selected IOVA mode 'PA' 00:11:45.051 EAL: Probing VFIO support... 00:11:45.051 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:45.051 EAL: VFIO modules not loaded, skipping VFIO support... 00:11:45.051 EAL: Ask a virtual area of 0x2e000 bytes 00:11:45.051 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:11:45.051 EAL: Setting up physically contiguous memory... 
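The EAL lines above report host state rather than test logic: lcore and NUMA detection, the VFIO probe being skipped because the kernel module is absent, and IOVA mode 'PA' being selected before hugepage-backed memory is set up. As an illustrative aside (standard Linux procfs/sysfs locations, nothing specific to this run or harness), that state can be inspected by hand with:
# illustrative sketch, not part of the captured log
grep -i huge /proc/meminfo                                   # HugePages_Total/Free and Hugepagesize
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # 2 MB pages backing the EAL memseg lists
lsmod | grep -E 'vfio|uio' || echo 'no vfio/uio modules'     # why the VFIO probe above reports module not found
mount | grep hugetlbfs                                       # where hugepage files would be created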
00:11:45.051 EAL: Setting maximum number of open files to 524288 00:11:45.051 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:11:45.051 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:11:45.051 EAL: Ask a virtual area of 0x61000 bytes 00:11:45.051 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:11:45.051 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:45.051 EAL: Ask a virtual area of 0x400000000 bytes 00:11:45.051 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:11:45.051 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:11:45.051 EAL: Ask a virtual area of 0x61000 bytes 00:11:45.051 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:11:45.051 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:45.051 EAL: Ask a virtual area of 0x400000000 bytes 00:11:45.051 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:11:45.051 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:11:45.051 EAL: Ask a virtual area of 0x61000 bytes 00:11:45.051 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:11:45.051 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:45.051 EAL: Ask a virtual area of 0x400000000 bytes 00:11:45.051 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:11:45.051 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:11:45.052 EAL: Ask a virtual area of 0x61000 bytes 00:11:45.052 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:11:45.052 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:11:45.052 EAL: Ask a virtual area of 0x400000000 bytes 00:11:45.052 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:11:45.052 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:11:45.052 EAL: Hugepages will be freed exactly as allocated. 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: TSC frequency is ~2200000 KHz 00:11:45.052 EAL: Main lcore 0 is ready (tid=7f8454124a00;cpuset=[0]) 00:11:45.052 EAL: Trying to obtain current memory policy. 00:11:45.052 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:45.052 EAL: Restoring previous memory policy: 0 00:11:45.052 EAL: request: mp_malloc_sync 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: Heap on socket 0 was expanded by 2MB 00:11:45.052 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: No PCI address specified using 'addr=' in: bus=pci 00:11:45.052 EAL: Mem event callback 'spdk:(nil)' registered 00:11:45.052 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:11:45.052 00:11:45.052 00:11:45.052 CUnit - A unit testing framework for C - Version 2.1-3 00:11:45.052 http://cunit.sourceforge.net/ 00:11:45.052 00:11:45.052 00:11:45.052 Suite: components_suite 00:11:45.052 Test: vtophys_malloc_test ...passed 00:11:45.052 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:11:45.052 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:45.052 EAL: Restoring previous memory policy: 4 00:11:45.052 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.052 EAL: request: mp_malloc_sync 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: Heap on socket 0 was expanded by 4MB 00:11:45.052 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.052 EAL: request: mp_malloc_sync 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: Heap on socket 0 was shrunk by 4MB 00:11:45.052 EAL: Trying to obtain current memory policy. 00:11:45.052 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:45.052 EAL: Restoring previous memory policy: 4 00:11:45.052 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.052 EAL: request: mp_malloc_sync 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: Heap on socket 0 was expanded by 6MB 00:11:45.052 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.052 EAL: request: mp_malloc_sync 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: Heap on socket 0 was shrunk by 6MB 00:11:45.052 EAL: Trying to obtain current memory policy. 00:11:45.052 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:45.052 EAL: Restoring previous memory policy: 4 00:11:45.052 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.052 EAL: request: mp_malloc_sync 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: Heap on socket 0 was expanded by 10MB 00:11:45.052 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.052 EAL: request: mp_malloc_sync 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: Heap on socket 0 was shrunk by 10MB 00:11:45.052 EAL: Trying to obtain current memory policy. 00:11:45.052 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:45.052 EAL: Restoring previous memory policy: 4 00:11:45.052 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.052 EAL: request: mp_malloc_sync 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: Heap on socket 0 was expanded by 18MB 00:11:45.052 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.052 EAL: request: mp_malloc_sync 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: Heap on socket 0 was shrunk by 18MB 00:11:45.052 EAL: Trying to obtain current memory policy. 00:11:45.052 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:45.052 EAL: Restoring previous memory policy: 4 00:11:45.052 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.052 EAL: request: mp_malloc_sync 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: Heap on socket 0 was expanded by 34MB 00:11:45.052 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.052 EAL: request: mp_malloc_sync 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: Heap on socket 0 was shrunk by 34MB 00:11:45.052 EAL: Trying to obtain current memory policy. 
00:11:45.052 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:45.052 EAL: Restoring previous memory policy: 4 00:11:45.052 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.052 EAL: request: mp_malloc_sync 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: Heap on socket 0 was expanded by 66MB 00:11:45.052 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.052 EAL: request: mp_malloc_sync 00:11:45.052 EAL: No shared files mode enabled, IPC is disabled 00:11:45.052 EAL: Heap on socket 0 was shrunk by 66MB 00:11:45.052 EAL: Trying to obtain current memory policy. 00:11:45.052 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:45.311 EAL: Restoring previous memory policy: 4 00:11:45.311 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.311 EAL: request: mp_malloc_sync 00:11:45.311 EAL: No shared files mode enabled, IPC is disabled 00:11:45.311 EAL: Heap on socket 0 was expanded by 130MB 00:11:45.311 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.311 EAL: request: mp_malloc_sync 00:11:45.311 EAL: No shared files mode enabled, IPC is disabled 00:11:45.311 EAL: Heap on socket 0 was shrunk by 130MB 00:11:45.311 EAL: Trying to obtain current memory policy. 00:11:45.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:45.311 EAL: Restoring previous memory policy: 4 00:11:45.311 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.311 EAL: request: mp_malloc_sync 00:11:45.311 EAL: No shared files mode enabled, IPC is disabled 00:11:45.311 EAL: Heap on socket 0 was expanded by 258MB 00:11:45.311 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.311 EAL: request: mp_malloc_sync 00:11:45.311 EAL: No shared files mode enabled, IPC is disabled 00:11:45.311 EAL: Heap on socket 0 was shrunk by 258MB 00:11:45.311 EAL: Trying to obtain current memory policy. 00:11:45.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:45.570 EAL: Restoring previous memory policy: 4 00:11:45.570 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.570 EAL: request: mp_malloc_sync 00:11:45.570 EAL: No shared files mode enabled, IPC is disabled 00:11:45.570 EAL: Heap on socket 0 was expanded by 514MB 00:11:45.570 EAL: Calling mem event callback 'spdk:(nil)' 00:11:45.828 EAL: request: mp_malloc_sync 00:11:45.828 EAL: No shared files mode enabled, IPC is disabled 00:11:45.828 EAL: Heap on socket 0 was shrunk by 514MB 00:11:45.828 EAL: Trying to obtain current memory policy. 
00:11:45.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:11:46.087 EAL: Restoring previous memory policy: 4 00:11:46.087 EAL: Calling mem event callback 'spdk:(nil)' 00:11:46.087 EAL: request: mp_malloc_sync 00:11:46.087 EAL: No shared files mode enabled, IPC is disabled 00:11:46.087 EAL: Heap on socket 0 was expanded by 1026MB 00:11:46.087 EAL: Calling mem event callback 'spdk:(nil)' 00:11:46.345 passed 00:11:46.345 00:11:46.345 Run Summary: Type Total Ran Passed Failed Inactive 00:11:46.345 suites 1 1 n/a 0 0 00:11:46.345 tests 2 2 2 0 0 00:11:46.345 asserts 5225 5225 5225 0 n/a 00:11:46.345 00:11:46.345 Elapsed time = 1.242 seconds 00:11:46.345 EAL: request: mp_malloc_sync 00:11:46.345 EAL: No shared files mode enabled, IPC is disabled 00:11:46.345 EAL: Heap on socket 0 was shrunk by 1026MB 00:11:46.345 EAL: Calling mem event callback 'spdk:(nil)' 00:11:46.345 EAL: request: mp_malloc_sync 00:11:46.345 EAL: No shared files mode enabled, IPC is disabled 00:11:46.345 EAL: Heap on socket 0 was shrunk by 2MB 00:11:46.345 EAL: No shared files mode enabled, IPC is disabled 00:11:46.345 EAL: No shared files mode enabled, IPC is disabled 00:11:46.345 EAL: No shared files mode enabled, IPC is disabled 00:11:46.345 00:11:46.345 real 0m1.438s 00:11:46.345 user 0m0.787s 00:11:46.345 sys 0m0.522s 00:11:46.345 13:29:59 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:46.345 13:29:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:11:46.345 ************************************ 00:11:46.345 END TEST env_vtophys 00:11:46.345 ************************************ 00:11:46.345 13:29:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:46.345 13:29:59 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:46.345 13:29:59 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:46.345 13:29:59 env -- common/autotest_common.sh@10 -- # set +x 00:11:46.345 ************************************ 00:11:46.345 START TEST env_pci 00:11:46.345 ************************************ 00:11:46.345 13:29:59 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:11:46.345 00:11:46.345 00:11:46.345 CUnit - A unit testing framework for C - Version 2.1-3 00:11:46.345 http://cunit.sourceforge.net/ 00:11:46.345 00:11:46.345 00:11:46.345 Suite: pci 00:11:46.345 Test: pci_hook ...[2024-05-15 13:29:59.421847] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 73806 has claimed it 00:11:46.345 passed 00:11:46.345 00:11:46.346 Run Summary: Type Total Ran Passed Failed Inactive 00:11:46.346 suites 1 1 n/a 0 0 00:11:46.346 tests 1 1 1 0 0 00:11:46.346 asserts 25 25 25 0 n/a 00:11:46.346 00:11:46.346 Elapsed time = 0.002 seconds 00:11:46.346 EAL: Cannot find device (10000:00:01.0) 00:11:46.346 EAL: Failed to attach device on primary process 00:11:46.346 00:11:46.346 real 0m0.019s 00:11:46.346 user 0m0.010s 00:11:46.346 sys 0m0.009s 00:11:46.346 13:29:59 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:46.346 13:29:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:11:46.346 ************************************ 00:11:46.346 END TEST env_pci 00:11:46.346 ************************************ 00:11:46.604 13:29:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:11:46.604 13:29:59 env -- env/env.sh@15 -- # uname 00:11:46.604 13:29:59 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:11:46.604 13:29:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:11:46.604 13:29:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:46.604 13:29:59 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:46.604 13:29:59 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:46.604 13:29:59 env -- common/autotest_common.sh@10 -- # set +x 00:11:46.604 ************************************ 00:11:46.604 START TEST env_dpdk_post_init 00:11:46.604 ************************************ 00:11:46.604 13:29:59 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:11:46.604 EAL: Detected CPU lcores: 10 00:11:46.604 EAL: Detected NUMA nodes: 1 00:11:46.604 EAL: Detected shared linkage of DPDK 00:11:46.604 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:46.604 EAL: Selected IOVA mode 'PA' 00:11:46.604 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:46.604 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:11:46.604 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:11:46.604 Starting DPDK initialization... 00:11:46.604 Starting SPDK post initialization... 00:11:46.604 SPDK NVMe probe 00:11:46.604 Attaching to 0000:00:10.0 00:11:46.604 Attaching to 0000:00:11.0 00:11:46.604 Attached to 0000:00:10.0 00:11:46.604 Attached to 0000:00:11.0 00:11:46.604 Cleaning up... 00:11:46.604 00:11:46.604 real 0m0.181s 00:11:46.604 user 0m0.040s 00:11:46.604 sys 0m0.041s 00:11:46.604 13:29:59 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:46.604 13:29:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:11:46.604 ************************************ 00:11:46.604 END TEST env_dpdk_post_init 00:11:46.604 ************************************ 00:11:46.863 13:29:59 env -- env/env.sh@26 -- # uname 00:11:46.863 13:29:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:11:46.863 13:29:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:46.863 13:29:59 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:46.863 13:29:59 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:46.863 13:29:59 env -- common/autotest_common.sh@10 -- # set +x 00:11:46.863 ************************************ 00:11:46.863 START TEST env_mem_callbacks 00:11:46.863 ************************************ 00:11:46.863 13:29:59 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:11:46.863 EAL: Detected CPU lcores: 10 00:11:46.863 EAL: Detected NUMA nodes: 1 00:11:46.863 EAL: Detected shared linkage of DPDK 00:11:46.863 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:11:46.863 EAL: Selected IOVA mode 'PA' 00:11:46.863 00:11:46.863 00:11:46.863 CUnit - A unit testing framework for C - Version 2.1-3 00:11:46.863 http://cunit.sourceforge.net/ 00:11:46.863 00:11:46.863 00:11:46.863 Suite: memory 00:11:46.863 Test: test ... 
00:11:46.863 TELEMETRY: No legacy callbacks, legacy socket not created 00:11:46.863 register 0x200000200000 2097152 00:11:46.863 malloc 3145728 00:11:46.863 register 0x200000400000 4194304 00:11:46.863 buf 0x200000500000 len 3145728 PASSED 00:11:46.863 malloc 64 00:11:46.863 buf 0x2000004fff40 len 64 PASSED 00:11:46.863 malloc 4194304 00:11:46.863 register 0x200000800000 6291456 00:11:46.863 buf 0x200000a00000 len 4194304 PASSED 00:11:46.863 free 0x200000500000 3145728 00:11:46.863 free 0x2000004fff40 64 00:11:46.863 unregister 0x200000400000 4194304 PASSED 00:11:46.863 free 0x200000a00000 4194304 00:11:46.863 unregister 0x200000800000 6291456 PASSED 00:11:46.863 malloc 8388608 00:11:46.863 register 0x200000400000 10485760 00:11:46.863 buf 0x200000600000 len 8388608 PASSED 00:11:46.863 free 0x200000600000 8388608 00:11:46.863 unregister 0x200000400000 10485760 PASSED 00:11:46.863 passed 00:11:46.863 00:11:46.863 Run Summary: Type Total Ran Passed Failed Inactive 00:11:46.863 suites 1 1 n/a 0 0 00:11:46.863 tests 1 1 1 0 0 00:11:46.863 asserts 15 15 15 0 n/a 00:11:46.863 00:11:46.863 Elapsed time = 0.007 seconds 00:11:46.863 00:11:46.863 real 0m0.141s 00:11:46.863 user 0m0.017s 00:11:46.863 sys 0m0.022s 00:11:46.863 13:29:59 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:46.863 ************************************ 00:11:46.863 END TEST env_mem_callbacks 00:11:46.863 ************************************ 00:11:46.863 13:29:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:11:46.863 00:11:46.863 real 0m2.352s 00:11:46.863 user 0m1.182s 00:11:46.863 sys 0m0.812s 00:11:46.863 13:29:59 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:46.863 ************************************ 00:11:46.863 END TEST env 00:11:46.863 13:29:59 env -- common/autotest_common.sh@10 -- # set +x 00:11:46.863 ************************************ 00:11:46.863 13:29:59 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:46.863 13:29:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:46.863 13:29:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:46.863 13:29:59 -- common/autotest_common.sh@10 -- # set +x 00:11:46.863 ************************************ 00:11:46.863 START TEST rpc 00:11:46.863 ************************************ 00:11:46.863 13:29:59 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:11:47.121 * Looking for test storage... 00:11:47.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:47.121 13:30:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=73915 00:11:47.121 13:30:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:47.121 13:30:00 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:11:47.121 13:30:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 73915 00:11:47.121 13:30:00 rpc -- common/autotest_common.sh@827 -- # '[' -z 73915 ']' 00:11:47.121 13:30:00 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.121 13:30:00 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:47.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.121 13:30:00 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
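[editor's note] For readers following the trace, the lines above are the standard rpc.sh prologue: spdk_tgt is launched with the bdev tracepoint group enabled and the harness then blocks until the RPC socket answers. A minimal hand-run equivalent is sketched below; the binary and script paths are the ones printed in the log, while the polling loop and its sleep interval are assumptions standing in for the harness's own waitforlisten helper.
SPDK_REPO=/home/vagrant/spdk_repo/spdk              # path as printed in the log
"$SPDK_REPO/build/bin/spdk_tgt" -e bdev &           # same invocation rpc.sh@64 traces
tgt_pid=$!
# Assumption: poll the default UNIX socket instead of using waitforlisten.
until "$SPDK_REPO/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
  sleep 0.2
done
echo "spdk_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"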
00:11:47.121 13:30:00 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:47.121 13:30:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.121 [2024-05-15 13:30:00.090498] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:11:47.121 [2024-05-15 13:30:00.090627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73915 ] 00:11:47.121 [2024-05-15 13:30:00.213933] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:47.379 [2024-05-15 13:30:00.233454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.379 [2024-05-15 13:30:00.335062] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:11:47.379 [2024-05-15 13:30:00.335138] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 73915' to capture a snapshot of events at runtime. 00:11:47.379 [2024-05-15 13:30:00.335160] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.379 [2024-05-15 13:30:00.335175] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.379 [2024-05-15 13:30:00.335188] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid73915 for offline analysis/debug. 00:11:47.379 [2024-05-15 13:30:00.335228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.314 13:30:01 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:48.314 13:30:01 rpc -- common/autotest_common.sh@860 -- # return 0 00:11:48.314 13:30:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:48.314 13:30:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:11:48.314 13:30:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:11:48.314 13:30:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:11:48.314 13:30:01 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:48.314 13:30:01 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:48.314 13:30:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.314 ************************************ 00:11:48.314 START TEST rpc_integrity 00:11:48.314 ************************************ 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:48.314 13:30:01 rpc.rpc_integrity 
-- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:48.314 { 00:11:48.314 "aliases": [ 00:11:48.314 "898f780d-3dc5-4d07-a5c6-9559ac853638" 00:11:48.314 ], 00:11:48.314 "assigned_rate_limits": { 00:11:48.314 "r_mbytes_per_sec": 0, 00:11:48.314 "rw_ios_per_sec": 0, 00:11:48.314 "rw_mbytes_per_sec": 0, 00:11:48.314 "w_mbytes_per_sec": 0 00:11:48.314 }, 00:11:48.314 "block_size": 512, 00:11:48.314 "claimed": false, 00:11:48.314 "driver_specific": {}, 00:11:48.314 "memory_domains": [ 00:11:48.314 { 00:11:48.314 "dma_device_id": "system", 00:11:48.314 "dma_device_type": 1 00:11:48.314 }, 00:11:48.314 { 00:11:48.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.314 "dma_device_type": 2 00:11:48.314 } 00:11:48.314 ], 00:11:48.314 "name": "Malloc0", 00:11:48.314 "num_blocks": 16384, 00:11:48.314 "product_name": "Malloc disk", 00:11:48.314 "supported_io_types": { 00:11:48.314 "abort": true, 00:11:48.314 "compare": false, 00:11:48.314 "compare_and_write": false, 00:11:48.314 "flush": true, 00:11:48.314 "nvme_admin": false, 00:11:48.314 "nvme_io": false, 00:11:48.314 "read": true, 00:11:48.314 "reset": true, 00:11:48.314 "unmap": true, 00:11:48.314 "write": true, 00:11:48.314 "write_zeroes": true 00:11:48.314 }, 00:11:48.314 "uuid": "898f780d-3dc5-4d07-a5c6-9559ac853638", 00:11:48.314 "zoned": false 00:11:48.314 } 00:11:48.314 ]' 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:48.314 [2024-05-15 13:30:01.226673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:11:48.314 [2024-05-15 13:30:01.226735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.314 [2024-05-15 13:30:01.226754] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a58260 00:11:48.314 [2024-05-15 13:30:01.226764] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.314 [2024-05-15 13:30:01.228447] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.314 [2024-05-15 13:30:01.228482] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:48.314 Passthru0 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:11:48.314 13:30:01 rpc.rpc_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:48.314 { 00:11:48.314 "aliases": [ 00:11:48.314 "898f780d-3dc5-4d07-a5c6-9559ac853638" 00:11:48.314 ], 00:11:48.314 "assigned_rate_limits": { 00:11:48.314 "r_mbytes_per_sec": 0, 00:11:48.314 "rw_ios_per_sec": 0, 00:11:48.314 "rw_mbytes_per_sec": 0, 00:11:48.314 "w_mbytes_per_sec": 0 00:11:48.314 }, 00:11:48.314 "block_size": 512, 00:11:48.314 "claim_type": "exclusive_write", 00:11:48.314 "claimed": true, 00:11:48.314 "driver_specific": {}, 00:11:48.314 "memory_domains": [ 00:11:48.314 { 00:11:48.314 "dma_device_id": "system", 00:11:48.314 "dma_device_type": 1 00:11:48.314 }, 00:11:48.314 { 00:11:48.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.314 "dma_device_type": 2 00:11:48.314 } 00:11:48.314 ], 00:11:48.314 "name": "Malloc0", 00:11:48.314 "num_blocks": 16384, 00:11:48.314 "product_name": "Malloc disk", 00:11:48.314 "supported_io_types": { 00:11:48.314 "abort": true, 00:11:48.314 "compare": false, 00:11:48.314 "compare_and_write": false, 00:11:48.314 "flush": true, 00:11:48.314 "nvme_admin": false, 00:11:48.314 "nvme_io": false, 00:11:48.314 "read": true, 00:11:48.314 "reset": true, 00:11:48.314 "unmap": true, 00:11:48.314 "write": true, 00:11:48.314 "write_zeroes": true 00:11:48.314 }, 00:11:48.314 "uuid": "898f780d-3dc5-4d07-a5c6-9559ac853638", 00:11:48.314 "zoned": false 00:11:48.314 }, 00:11:48.314 { 00:11:48.314 "aliases": [ 00:11:48.314 "eb742c40-903a-56b1-afd8-d674ecbede35" 00:11:48.314 ], 00:11:48.314 "assigned_rate_limits": { 00:11:48.314 "r_mbytes_per_sec": 0, 00:11:48.314 "rw_ios_per_sec": 0, 00:11:48.314 "rw_mbytes_per_sec": 0, 00:11:48.314 "w_mbytes_per_sec": 0 00:11:48.314 }, 00:11:48.314 "block_size": 512, 00:11:48.314 "claimed": false, 00:11:48.314 "driver_specific": { 00:11:48.314 "passthru": { 00:11:48.314 "base_bdev_name": "Malloc0", 00:11:48.314 "name": "Passthru0" 00:11:48.314 } 00:11:48.314 }, 00:11:48.314 "memory_domains": [ 00:11:48.314 { 00:11:48.314 "dma_device_id": "system", 00:11:48.314 "dma_device_type": 1 00:11:48.314 }, 00:11:48.314 { 00:11:48.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.314 "dma_device_type": 2 00:11:48.314 } 00:11:48.314 ], 00:11:48.314 "name": "Passthru0", 00:11:48.314 "num_blocks": 16384, 00:11:48.314 "product_name": "passthru", 00:11:48.314 "supported_io_types": { 00:11:48.314 "abort": true, 00:11:48.314 "compare": false, 00:11:48.314 "compare_and_write": false, 00:11:48.314 "flush": true, 00:11:48.314 "nvme_admin": false, 00:11:48.314 "nvme_io": false, 00:11:48.314 "read": true, 00:11:48.314 "reset": true, 00:11:48.314 "unmap": true, 00:11:48.314 "write": true, 00:11:48.314 "write_zeroes": true 00:11:48.314 }, 00:11:48.314 "uuid": "eb742c40-903a-56b1-afd8-d674ecbede35", 00:11:48.314 "zoned": false 00:11:48.314 } 00:11:48.314 ]' 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:48.314 13:30:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:48.314 00:11:48.314 real 0m0.328s 00:11:48.314 user 0m0.203s 00:11:48.314 sys 0m0.041s 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:48.314 13:30:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:48.314 ************************************ 00:11:48.314 END TEST rpc_integrity 00:11:48.314 ************************************ 00:11:48.596 13:30:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:11:48.596 13:30:01 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:48.596 13:30:01 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:48.596 13:30:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.596 ************************************ 00:11:48.596 START TEST rpc_plugins 00:11:48.596 ************************************ 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:11:48.596 13:30:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.596 13:30:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:11:48.596 13:30:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.596 13:30:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:11:48.596 { 00:11:48.596 "aliases": [ 00:11:48.596 "51a6714b-97cf-47aa-b7af-4950451067c4" 00:11:48.596 ], 00:11:48.596 "assigned_rate_limits": { 00:11:48.596 "r_mbytes_per_sec": 0, 00:11:48.596 "rw_ios_per_sec": 0, 00:11:48.596 "rw_mbytes_per_sec": 0, 00:11:48.596 "w_mbytes_per_sec": 0 00:11:48.596 }, 00:11:48.596 "block_size": 4096, 00:11:48.596 "claimed": false, 00:11:48.596 "driver_specific": {}, 00:11:48.596 "memory_domains": [ 00:11:48.596 { 00:11:48.596 "dma_device_id": "system", 00:11:48.596 "dma_device_type": 1 00:11:48.596 }, 00:11:48.596 { 00:11:48.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.596 "dma_device_type": 2 00:11:48.596 } 00:11:48.596 ], 00:11:48.596 "name": "Malloc1", 00:11:48.596 "num_blocks": 256, 00:11:48.596 "product_name": "Malloc disk", 00:11:48.596 
"supported_io_types": { 00:11:48.596 "abort": true, 00:11:48.596 "compare": false, 00:11:48.596 "compare_and_write": false, 00:11:48.596 "flush": true, 00:11:48.596 "nvme_admin": false, 00:11:48.596 "nvme_io": false, 00:11:48.596 "read": true, 00:11:48.596 "reset": true, 00:11:48.596 "unmap": true, 00:11:48.596 "write": true, 00:11:48.596 "write_zeroes": true 00:11:48.596 }, 00:11:48.596 "uuid": "51a6714b-97cf-47aa-b7af-4950451067c4", 00:11:48.596 "zoned": false 00:11:48.596 } 00:11:48.596 ]' 00:11:48.596 13:30:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:11:48.596 13:30:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:11:48.596 13:30:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.596 13:30:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.596 13:30:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:11:48.596 13:30:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:11:48.596 13:30:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:11:48.596 00:11:48.596 real 0m0.168s 00:11:48.596 user 0m0.107s 00:11:48.596 sys 0m0.023s 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:48.596 13:30:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:11:48.596 ************************************ 00:11:48.596 END TEST rpc_plugins 00:11:48.596 ************************************ 00:11:48.596 13:30:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:11:48.596 13:30:01 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:48.596 13:30:01 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:48.596 13:30:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.596 ************************************ 00:11:48.596 START TEST rpc_trace_cmd_test 00:11:48.596 ************************************ 00:11:48.596 13:30:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:11:48.596 13:30:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:11:48.596 13:30:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:11:48.596 13:30:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.596 13:30:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.596 13:30:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.596 13:30:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:11:48.596 "bdev": { 00:11:48.596 "mask": "0x8", 00:11:48.596 "tpoint_mask": "0xffffffffffffffff" 00:11:48.596 }, 00:11:48.596 "bdev_nvme": { 00:11:48.596 "mask": "0x4000", 00:11:48.596 "tpoint_mask": "0x0" 00:11:48.596 }, 00:11:48.596 "blobfs": { 00:11:48.596 "mask": "0x80", 00:11:48.596 "tpoint_mask": "0x0" 00:11:48.596 }, 00:11:48.596 "dsa": { 00:11:48.596 "mask": "0x200", 00:11:48.596 "tpoint_mask": "0x0" 00:11:48.596 }, 00:11:48.596 "ftl": { 00:11:48.596 "mask": 
"0x40", 00:11:48.596 "tpoint_mask": "0x0" 00:11:48.596 }, 00:11:48.596 "iaa": { 00:11:48.596 "mask": "0x1000", 00:11:48.596 "tpoint_mask": "0x0" 00:11:48.596 }, 00:11:48.596 "iscsi_conn": { 00:11:48.596 "mask": "0x2", 00:11:48.596 "tpoint_mask": "0x0" 00:11:48.596 }, 00:11:48.596 "nvme_pcie": { 00:11:48.596 "mask": "0x800", 00:11:48.596 "tpoint_mask": "0x0" 00:11:48.596 }, 00:11:48.596 "nvme_tcp": { 00:11:48.596 "mask": "0x2000", 00:11:48.596 "tpoint_mask": "0x0" 00:11:48.596 }, 00:11:48.596 "nvmf_rdma": { 00:11:48.596 "mask": "0x10", 00:11:48.596 "tpoint_mask": "0x0" 00:11:48.596 }, 00:11:48.596 "nvmf_tcp": { 00:11:48.596 "mask": "0x20", 00:11:48.596 "tpoint_mask": "0x0" 00:11:48.596 }, 00:11:48.596 "scsi": { 00:11:48.596 "mask": "0x4", 00:11:48.596 "tpoint_mask": "0x0" 00:11:48.596 }, 00:11:48.596 "sock": { 00:11:48.596 "mask": "0x8000", 00:11:48.596 "tpoint_mask": "0x0" 00:11:48.596 }, 00:11:48.596 "thread": { 00:11:48.596 "mask": "0x400", 00:11:48.596 "tpoint_mask": "0x0" 00:11:48.597 }, 00:11:48.597 "tpoint_group_mask": "0x8", 00:11:48.597 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid73915" 00:11:48.597 }' 00:11:48.597 13:30:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:11:48.854 13:30:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:11:48.854 13:30:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:11:48.854 13:30:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:11:48.854 13:30:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:11:48.854 13:30:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:11:48.854 13:30:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:11:48.854 13:30:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:11:48.854 13:30:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:11:48.854 13:30:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:11:48.854 00:11:48.854 real 0m0.276s 00:11:48.854 user 0m0.230s 00:11:48.854 sys 0m0.034s 00:11:48.854 13:30:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:48.854 13:30:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.854 ************************************ 00:11:48.854 END TEST rpc_trace_cmd_test 00:11:48.854 ************************************ 00:11:49.113 13:30:01 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:11:49.113 13:30:01 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:11:49.113 13:30:01 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:49.113 13:30:01 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:49.113 13:30:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.113 ************************************ 00:11:49.113 START TEST go_rpc 00:11:49.113 ************************************ 00:11:49.113 13:30:01 rpc.go_rpc -- common/autotest_common.sh@1121 -- # go_rpc 00:11:49.113 13:30:01 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:11:49.113 13:30:02 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:11:49.113 13:30:02 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:11:49.113 13:30:02 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:11:49.113 13:30:02 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:11:49.113 13:30:02 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.113 13:30:02 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:49.113 13:30:02 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.113 13:30:02 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:11:49.113 13:30:02 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:11:49.113 13:30:02 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["ac73fdec-265d-4755-8282-a62ec5128939"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"ac73fdec-265d-4755-8282-a62ec5128939","zoned":false}]' 00:11:49.113 13:30:02 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:11:49.113 13:30:02 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:11:49.113 13:30:02 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:11:49.113 13:30:02 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.113 13:30:02 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.113 13:30:02 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.113 13:30:02 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:11:49.113 13:30:02 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:11:49.113 13:30:02 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:11:49.113 13:30:02 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:11:49.113 00:11:49.113 real 0m0.213s 00:11:49.113 user 0m0.138s 00:11:49.113 sys 0m0.035s 00:11:49.113 13:30:02 rpc.go_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:49.113 13:30:02 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.113 ************************************ 00:11:49.113 END TEST go_rpc 00:11:49.113 ************************************ 00:11:49.372 13:30:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:11:49.372 13:30:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:11:49.372 13:30:02 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:49.372 13:30:02 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:49.372 13:30:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:49.372 ************************************ 00:11:49.372 START TEST rpc_daemon_integrity 00:11:49.372 ************************************ 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 
8 512 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:11:49.372 { 00:11:49.372 "aliases": [ 00:11:49.372 "7ca111ab-bbde-4eef-9c8c-ce02e4c44de9" 00:11:49.372 ], 00:11:49.372 "assigned_rate_limits": { 00:11:49.372 "r_mbytes_per_sec": 0, 00:11:49.372 "rw_ios_per_sec": 0, 00:11:49.372 "rw_mbytes_per_sec": 0, 00:11:49.372 "w_mbytes_per_sec": 0 00:11:49.372 }, 00:11:49.372 "block_size": 512, 00:11:49.372 "claimed": false, 00:11:49.372 "driver_specific": {}, 00:11:49.372 "memory_domains": [ 00:11:49.372 { 00:11:49.372 "dma_device_id": "system", 00:11:49.372 "dma_device_type": 1 00:11:49.372 }, 00:11:49.372 { 00:11:49.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.372 "dma_device_type": 2 00:11:49.372 } 00:11:49.372 ], 00:11:49.372 "name": "Malloc3", 00:11:49.372 "num_blocks": 16384, 00:11:49.372 "product_name": "Malloc disk", 00:11:49.372 "supported_io_types": { 00:11:49.372 "abort": true, 00:11:49.372 "compare": false, 00:11:49.372 "compare_and_write": false, 00:11:49.372 "flush": true, 00:11:49.372 "nvme_admin": false, 00:11:49.372 "nvme_io": false, 00:11:49.372 "read": true, 00:11:49.372 "reset": true, 00:11:49.372 "unmap": true, 00:11:49.372 "write": true, 00:11:49.372 "write_zeroes": true 00:11:49.372 }, 00:11:49.372 "uuid": "7ca111ab-bbde-4eef-9c8c-ce02e4c44de9", 00:11:49.372 "zoned": false 00:11:49.372 } 00:11:49.372 ]' 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:49.372 [2024-05-15 13:30:02.411963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:49.372 [2024-05-15 13:30:02.412013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.372 [2024-05-15 13:30:02.412032] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a58c20 00:11:49.372 [2024-05-15 13:30:02.412042] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.372 [2024-05-15 13:30:02.413621] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.372 [2024-05-15 13:30:02.413652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:11:49.372 Passthru0 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 
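[editor's note] Both integrity tests in this run (rpc_integrity above and rpc_daemon_integrity here) trace the same bdev lifecycle through rpc_cmd: create a malloc bdev, layer a passthru bdev on it, list the bdevs, then tear both down. A hedged stand-alone version using scripts/rpc.py is sketched below; the RPC method names and arguments are exactly the ones in the trace, while the jq length check and the captured bdev name are illustrative assumptions.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # rpc.py location assumed from the repo paths in the log
MALLOC=$($RPC bdev_malloc_create 8 512)             # 8 MiB malloc bdev, 512-byte blocks; prints the bdev name
$RPC bdev_passthru_create -b "$MALLOC" -p Passthru0
$RPC bdev_get_bdevs | jq length                     # expect 2, matching the "'[' 2 == 2 ']'" assertion in the trace
$RPC bdev_passthru_delete Passthru0
$RPC bdev_malloc_delete "$MALLOC"
$RPC bdev_get_bdevs | jq length                     # back to 0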
00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.372 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:11:49.372 { 00:11:49.372 "aliases": [ 00:11:49.372 "7ca111ab-bbde-4eef-9c8c-ce02e4c44de9" 00:11:49.372 ], 00:11:49.372 "assigned_rate_limits": { 00:11:49.372 "r_mbytes_per_sec": 0, 00:11:49.372 "rw_ios_per_sec": 0, 00:11:49.372 "rw_mbytes_per_sec": 0, 00:11:49.372 "w_mbytes_per_sec": 0 00:11:49.372 }, 00:11:49.372 "block_size": 512, 00:11:49.372 "claim_type": "exclusive_write", 00:11:49.372 "claimed": true, 00:11:49.372 "driver_specific": {}, 00:11:49.372 "memory_domains": [ 00:11:49.372 { 00:11:49.372 "dma_device_id": "system", 00:11:49.372 "dma_device_type": 1 00:11:49.372 }, 00:11:49.372 { 00:11:49.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.372 "dma_device_type": 2 00:11:49.372 } 00:11:49.372 ], 00:11:49.372 "name": "Malloc3", 00:11:49.372 "num_blocks": 16384, 00:11:49.372 "product_name": "Malloc disk", 00:11:49.372 "supported_io_types": { 00:11:49.372 "abort": true, 00:11:49.372 "compare": false, 00:11:49.372 "compare_and_write": false, 00:11:49.372 "flush": true, 00:11:49.372 "nvme_admin": false, 00:11:49.372 "nvme_io": false, 00:11:49.372 "read": true, 00:11:49.372 "reset": true, 00:11:49.372 "unmap": true, 00:11:49.372 "write": true, 00:11:49.372 "write_zeroes": true 00:11:49.372 }, 00:11:49.372 "uuid": "7ca111ab-bbde-4eef-9c8c-ce02e4c44de9", 00:11:49.372 "zoned": false 00:11:49.372 }, 00:11:49.372 { 00:11:49.372 "aliases": [ 00:11:49.372 "974d5453-df55-50ce-a4cb-c7e0361e6172" 00:11:49.372 ], 00:11:49.372 "assigned_rate_limits": { 00:11:49.372 "r_mbytes_per_sec": 0, 00:11:49.372 "rw_ios_per_sec": 0, 00:11:49.372 "rw_mbytes_per_sec": 0, 00:11:49.372 "w_mbytes_per_sec": 0 00:11:49.372 }, 00:11:49.372 "block_size": 512, 00:11:49.372 "claimed": false, 00:11:49.372 "driver_specific": { 00:11:49.372 "passthru": { 00:11:49.372 "base_bdev_name": "Malloc3", 00:11:49.372 "name": "Passthru0" 00:11:49.372 } 00:11:49.372 }, 00:11:49.372 "memory_domains": [ 00:11:49.372 { 00:11:49.372 "dma_device_id": "system", 00:11:49.372 "dma_device_type": 1 00:11:49.372 }, 00:11:49.372 { 00:11:49.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.372 "dma_device_type": 2 00:11:49.372 } 00:11:49.372 ], 00:11:49.372 "name": "Passthru0", 00:11:49.372 "num_blocks": 16384, 00:11:49.372 "product_name": "passthru", 00:11:49.372 "supported_io_types": { 00:11:49.372 "abort": true, 00:11:49.373 "compare": false, 00:11:49.373 "compare_and_write": false, 00:11:49.373 "flush": true, 00:11:49.373 "nvme_admin": false, 00:11:49.373 "nvme_io": false, 00:11:49.373 "read": true, 00:11:49.373 "reset": true, 00:11:49.373 "unmap": true, 00:11:49.373 "write": true, 00:11:49.373 "write_zeroes": true 00:11:49.373 }, 00:11:49.373 "uuid": "974d5453-df55-50ce-a4cb-c7e0361e6172", 00:11:49.373 "zoned": false 00:11:49.373 } 00:11:49.373 ]' 00:11:49.373 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:11:49.631 00:11:49.631 real 0m0.327s 00:11:49.631 user 0m0.208s 00:11:49.631 sys 0m0.044s 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:49.631 13:30:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:11:49.631 ************************************ 00:11:49.631 END TEST rpc_daemon_integrity 00:11:49.631 ************************************ 00:11:49.631 13:30:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:11:49.631 13:30:02 rpc -- rpc/rpc.sh@84 -- # killprocess 73915 00:11:49.631 13:30:02 rpc -- common/autotest_common.sh@946 -- # '[' -z 73915 ']' 00:11:49.631 13:30:02 rpc -- common/autotest_common.sh@950 -- # kill -0 73915 00:11:49.631 13:30:02 rpc -- common/autotest_common.sh@951 -- # uname 00:11:49.631 13:30:02 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:49.631 13:30:02 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 73915 00:11:49.631 13:30:02 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:49.631 13:30:02 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:49.631 killing process with pid 73915 00:11:49.631 13:30:02 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 73915' 00:11:49.631 13:30:02 rpc -- common/autotest_common.sh@965 -- # kill 73915 00:11:49.631 13:30:02 rpc -- common/autotest_common.sh@970 -- # wait 73915 00:11:50.197 00:11:50.197 real 0m3.075s 00:11:50.197 user 0m4.017s 00:11:50.197 sys 0m0.772s 00:11:50.197 13:30:03 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:50.197 ************************************ 00:11:50.197 END TEST rpc 00:11:50.197 13:30:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.197 ************************************ 00:11:50.197 13:30:03 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:50.197 13:30:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:50.197 13:30:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:50.197 13:30:03 -- common/autotest_common.sh@10 -- # set +x 00:11:50.197 ************************************ 00:11:50.197 START TEST skip_rpc 00:11:50.197 ************************************ 00:11:50.197 13:30:03 skip_rpc -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:11:50.197 * Looking for test storage... 00:11:50.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:11:50.197 13:30:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:50.197 13:30:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:11:50.197 13:30:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:11:50.197 13:30:03 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:50.197 13:30:03 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:50.197 13:30:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:50.197 ************************************ 00:11:50.197 START TEST skip_rpc 00:11:50.197 ************************************ 00:11:50.197 13:30:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:11:50.197 13:30:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=74176 00:11:50.197 13:30:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:11:50.197 13:30:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:50.197 13:30:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:11:50.197 [2024-05-15 13:30:03.223271] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:11:50.197 [2024-05-15 13:30:03.223370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74176 ] 00:11:50.455 [2024-05-15 13:30:03.346315] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
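[editor's note] The skip_rpc case being started here is a negative test: the target comes up with --no-rpc-server, so any client call must fail because nothing ever listens on /var/tmp/spdk.sock. A hedged re-creation of that scenario is sketched below; the command line and the 5-second settle delay mirror the trace, while the explicit kill at the end is an assumption (the harness uses its killprocess helper).
SPDK_REPO=/home/vagrant/spdk_repo/spdk
"$SPDK_REPO/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &   # same flags as skip_rpc.sh@15
tgt_pid=$!
sleep 5                                                     # matches the harness's 'sleep 5'
if ! "$SPDK_REPO/scripts/rpc.py" spdk_get_version 2>/dev/null; then
  echo "RPC call refused as expected: no listener on /var/tmp/spdk.sock"
fi
kill "$tgt_pid"                                             # assumption: clean up by pid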
00:11:50.455 [2024-05-15 13:30:03.366236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.455 [2024-05-15 13:30:03.470299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.745 2024/05/15 13:30:08 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 74176 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 74176 ']' 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 74176 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74176 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:55.745 killing process with pid 74176 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74176' 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 74176 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 74176 00:11:55.745 00:11:55.745 real 0m5.410s 00:11:55.745 user 0m5.002s 00:11:55.745 sys 0m0.299s 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:55.745 13:30:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.745 ************************************ 00:11:55.745 END TEST skip_rpc 00:11:55.745 ************************************ 00:11:55.745 13:30:08 skip_rpc -- rpc/skip_rpc.sh@74 -- 
# run_test skip_rpc_with_json test_skip_rpc_with_json 00:11:55.745 13:30:08 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:55.745 13:30:08 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:55.745 13:30:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.745 ************************************ 00:11:55.745 START TEST skip_rpc_with_json 00:11:55.745 ************************************ 00:11:55.745 13:30:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:11:55.745 13:30:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:11:55.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.745 13:30:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=74270 00:11:55.745 13:30:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:11:55.746 13:30:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:11:55.746 13:30:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 74270 00:11:55.746 13:30:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 74270 ']' 00:11:55.746 13:30:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.746 13:30:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:55.746 13:30:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.746 13:30:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:55.746 13:30:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:55.746 [2024-05-15 13:30:08.679682] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:11:55.746 [2024-05-15 13:30:08.680002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74270 ] 00:11:55.746 [2024-05-15 13:30:08.802023] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
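[editor's note] The target started here (pid 74270) backs the skip_rpc_with_json flow traced below: create a TCP transport over RPC, dump the running configuration with save_config, then boot a second target from that JSON and grep its log for the transport init message. A hedged stand-alone sketch follows; the file names are the CONFIG_PATH/LOG_PATH values the harness set earlier, and the sleep before grepping is an assumption.
SPDK_REPO=/home/vagrant/spdk_repo/spdk
RPC="$SPDK_REPO/scripts/rpc.py"
CONFIG="$SPDK_REPO/test/rpc/config.json"            # CONFIG_PATH from skip_rpc.sh@11
LOG="$SPDK_REPO/test/rpc/log.txt"                   # LOG_PATH from skip_rpc.sh@12
$RPC nvmf_create_transport -t tcp                   # the call traced below at skip_rpc.sh@34
$RPC save_config > "$CONFIG"                        # skip_rpc.sh@36
# Relaunch without an RPC server, rebuilding state purely from the saved JSON:
"$SPDK_REPO/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --json "$CONFIG" > "$LOG" 2>&1 &
sleep 5
grep -q 'TCP Transport Init' "$LOG" && echo "transport restored from JSON"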
00:11:55.746 [2024-05-15 13:30:08.822333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.025 [2024-05-15 13:30:08.925219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.673 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:56.673 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:11:56.673 13:30:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:11:56.673 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.673 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:56.673 [2024-05-15 13:30:09.677786] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:11:56.673 2024/05/15 13:30:09 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:11:56.673 request: 00:11:56.673 { 00:11:56.673 "method": "nvmf_get_transports", 00:11:56.673 "params": { 00:11:56.673 "trtype": "tcp" 00:11:56.673 } 00:11:56.673 } 00:11:56.673 Got JSON-RPC error response 00:11:56.673 GoRPCClient: error on JSON-RPC call 00:11:56.673 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:11:56.673 13:30:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:11:56.673 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.673 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:56.673 [2024-05-15 13:30:09.689901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.673 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.673 13:30:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:11:56.673 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.673 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:11:56.931 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.931 13:30:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:56.931 { 00:11:56.931 "subsystems": [ 00:11:56.931 { 00:11:56.931 "subsystem": "keyring", 00:11:56.931 "config": [] 00:11:56.931 }, 00:11:56.931 { 00:11:56.931 "subsystem": "iobuf", 00:11:56.931 "config": [ 00:11:56.931 { 00:11:56.931 "method": "iobuf_set_options", 00:11:56.931 "params": { 00:11:56.931 "large_bufsize": 135168, 00:11:56.931 "large_pool_count": 1024, 00:11:56.931 "small_bufsize": 8192, 00:11:56.931 "small_pool_count": 8192 00:11:56.931 } 00:11:56.931 } 00:11:56.931 ] 00:11:56.931 }, 00:11:56.931 { 00:11:56.931 "subsystem": "sock", 00:11:56.931 "config": [ 00:11:56.931 { 00:11:56.931 "method": "sock_impl_set_options", 00:11:56.931 "params": { 00:11:56.931 "enable_ktls": false, 00:11:56.931 "enable_placement_id": 0, 00:11:56.931 "enable_quickack": false, 00:11:56.931 "enable_recv_pipe": true, 00:11:56.931 "enable_zerocopy_send_client": false, 00:11:56.931 "enable_zerocopy_send_server": true, 00:11:56.931 "impl_name": "posix", 00:11:56.931 "recv_buf_size": 2097152, 00:11:56.931 "send_buf_size": 2097152, 00:11:56.931 "tls_version": 0, 00:11:56.931 "zerocopy_threshold": 0 
00:11:56.931 } 00:11:56.931 }, 00:11:56.931 { 00:11:56.931 "method": "sock_impl_set_options", 00:11:56.931 "params": { 00:11:56.931 "enable_ktls": false, 00:11:56.931 "enable_placement_id": 0, 00:11:56.931 "enable_quickack": false, 00:11:56.931 "enable_recv_pipe": true, 00:11:56.931 "enable_zerocopy_send_client": false, 00:11:56.931 "enable_zerocopy_send_server": true, 00:11:56.931 "impl_name": "ssl", 00:11:56.931 "recv_buf_size": 4096, 00:11:56.931 "send_buf_size": 4096, 00:11:56.931 "tls_version": 0, 00:11:56.931 "zerocopy_threshold": 0 00:11:56.931 } 00:11:56.931 } 00:11:56.931 ] 00:11:56.931 }, 00:11:56.931 { 00:11:56.931 "subsystem": "vmd", 00:11:56.931 "config": [] 00:11:56.931 }, 00:11:56.931 { 00:11:56.931 "subsystem": "accel", 00:11:56.931 "config": [ 00:11:56.931 { 00:11:56.931 "method": "accel_set_options", 00:11:56.931 "params": { 00:11:56.931 "buf_count": 2048, 00:11:56.931 "large_cache_size": 16, 00:11:56.931 "sequence_count": 2048, 00:11:56.931 "small_cache_size": 128, 00:11:56.931 "task_count": 2048 00:11:56.931 } 00:11:56.931 } 00:11:56.931 ] 00:11:56.931 }, 00:11:56.931 { 00:11:56.931 "subsystem": "bdev", 00:11:56.931 "config": [ 00:11:56.931 { 00:11:56.931 "method": "bdev_set_options", 00:11:56.931 "params": { 00:11:56.931 "bdev_auto_examine": true, 00:11:56.931 "bdev_io_cache_size": 256, 00:11:56.931 "bdev_io_pool_size": 65535, 00:11:56.931 "iobuf_large_cache_size": 16, 00:11:56.931 "iobuf_small_cache_size": 128 00:11:56.931 } 00:11:56.931 }, 00:11:56.931 { 00:11:56.931 "method": "bdev_raid_set_options", 00:11:56.931 "params": { 00:11:56.931 "process_window_size_kb": 1024 00:11:56.931 } 00:11:56.931 }, 00:11:56.931 { 00:11:56.931 "method": "bdev_iscsi_set_options", 00:11:56.931 "params": { 00:11:56.931 "timeout_sec": 30 00:11:56.931 } 00:11:56.931 }, 00:11:56.931 { 00:11:56.931 "method": "bdev_nvme_set_options", 00:11:56.931 "params": { 00:11:56.931 "action_on_timeout": "none", 00:11:56.931 "allow_accel_sequence": false, 00:11:56.931 "arbitration_burst": 0, 00:11:56.931 "bdev_retry_count": 3, 00:11:56.931 "ctrlr_loss_timeout_sec": 0, 00:11:56.931 "delay_cmd_submit": true, 00:11:56.931 "dhchap_dhgroups": [ 00:11:56.931 "null", 00:11:56.931 "ffdhe2048", 00:11:56.931 "ffdhe3072", 00:11:56.931 "ffdhe4096", 00:11:56.931 "ffdhe6144", 00:11:56.931 "ffdhe8192" 00:11:56.931 ], 00:11:56.931 "dhchap_digests": [ 00:11:56.931 "sha256", 00:11:56.931 "sha384", 00:11:56.931 "sha512" 00:11:56.931 ], 00:11:56.931 "disable_auto_failback": false, 00:11:56.931 "fast_io_fail_timeout_sec": 0, 00:11:56.931 "generate_uuids": false, 00:11:56.931 "high_priority_weight": 0, 00:11:56.931 "io_path_stat": false, 00:11:56.931 "io_queue_requests": 0, 00:11:56.931 "keep_alive_timeout_ms": 10000, 00:11:56.931 "low_priority_weight": 0, 00:11:56.931 "medium_priority_weight": 0, 00:11:56.931 "nvme_adminq_poll_period_us": 10000, 00:11:56.931 "nvme_error_stat": false, 00:11:56.931 "nvme_ioq_poll_period_us": 0, 00:11:56.931 "rdma_cm_event_timeout_ms": 0, 00:11:56.931 "rdma_max_cq_size": 0, 00:11:56.931 "rdma_srq_size": 0, 00:11:56.931 "reconnect_delay_sec": 0, 00:11:56.931 "timeout_admin_us": 0, 00:11:56.931 "timeout_us": 0, 00:11:56.931 "transport_ack_timeout": 0, 00:11:56.931 "transport_retry_count": 4, 00:11:56.932 "transport_tos": 0 00:11:56.932 } 00:11:56.932 }, 00:11:56.932 { 00:11:56.932 "method": "bdev_nvme_set_hotplug", 00:11:56.932 "params": { 00:11:56.932 "enable": false, 00:11:56.932 "period_us": 100000 00:11:56.932 } 00:11:56.932 }, 00:11:56.932 { 00:11:56.932 "method": "bdev_wait_for_examine" 
00:11:56.932 } 00:11:56.932 ] 00:11:56.932 }, 00:11:56.932 { 00:11:56.932 "subsystem": "scsi", 00:11:56.932 "config": null 00:11:56.932 }, 00:11:56.932 { 00:11:56.932 "subsystem": "scheduler", 00:11:56.932 "config": [ 00:11:56.932 { 00:11:56.932 "method": "framework_set_scheduler", 00:11:56.932 "params": { 00:11:56.932 "name": "static" 00:11:56.932 } 00:11:56.932 } 00:11:56.932 ] 00:11:56.932 }, 00:11:56.932 { 00:11:56.932 "subsystem": "vhost_scsi", 00:11:56.932 "config": [] 00:11:56.932 }, 00:11:56.932 { 00:11:56.932 "subsystem": "vhost_blk", 00:11:56.932 "config": [] 00:11:56.932 }, 00:11:56.932 { 00:11:56.932 "subsystem": "ublk", 00:11:56.932 "config": [] 00:11:56.932 }, 00:11:56.932 { 00:11:56.932 "subsystem": "nbd", 00:11:56.932 "config": [] 00:11:56.932 }, 00:11:56.932 { 00:11:56.932 "subsystem": "nvmf", 00:11:56.932 "config": [ 00:11:56.932 { 00:11:56.932 "method": "nvmf_set_config", 00:11:56.932 "params": { 00:11:56.932 "admin_cmd_passthru": { 00:11:56.932 "identify_ctrlr": false 00:11:56.932 }, 00:11:56.932 "discovery_filter": "match_any" 00:11:56.932 } 00:11:56.932 }, 00:11:56.932 { 00:11:56.932 "method": "nvmf_set_max_subsystems", 00:11:56.932 "params": { 00:11:56.932 "max_subsystems": 1024 00:11:56.932 } 00:11:56.932 }, 00:11:56.932 { 00:11:56.932 "method": "nvmf_set_crdt", 00:11:56.932 "params": { 00:11:56.932 "crdt1": 0, 00:11:56.932 "crdt2": 0, 00:11:56.932 "crdt3": 0 00:11:56.932 } 00:11:56.932 }, 00:11:56.932 { 00:11:56.932 "method": "nvmf_create_transport", 00:11:56.932 "params": { 00:11:56.932 "abort_timeout_sec": 1, 00:11:56.932 "ack_timeout": 0, 00:11:56.932 "buf_cache_size": 4294967295, 00:11:56.932 "c2h_success": true, 00:11:56.932 "data_wr_pool_size": 0, 00:11:56.932 "dif_insert_or_strip": false, 00:11:56.932 "in_capsule_data_size": 4096, 00:11:56.932 "io_unit_size": 131072, 00:11:56.932 "max_aq_depth": 128, 00:11:56.932 "max_io_qpairs_per_ctrlr": 127, 00:11:56.932 "max_io_size": 131072, 00:11:56.932 "max_queue_depth": 128, 00:11:56.932 "num_shared_buffers": 511, 00:11:56.932 "sock_priority": 0, 00:11:56.932 "trtype": "TCP", 00:11:56.932 "zcopy": false 00:11:56.932 } 00:11:56.932 } 00:11:56.932 ] 00:11:56.932 }, 00:11:56.932 { 00:11:56.932 "subsystem": "iscsi", 00:11:56.932 "config": [ 00:11:56.932 { 00:11:56.932 "method": "iscsi_set_options", 00:11:56.932 "params": { 00:11:56.932 "allow_duplicated_isid": false, 00:11:56.932 "chap_group": 0, 00:11:56.932 "data_out_pool_size": 2048, 00:11:56.932 "default_time2retain": 20, 00:11:56.932 "default_time2wait": 2, 00:11:56.932 "disable_chap": false, 00:11:56.932 "error_recovery_level": 0, 00:11:56.932 "first_burst_length": 8192, 00:11:56.932 "immediate_data": true, 00:11:56.932 "immediate_data_pool_size": 16384, 00:11:56.932 "max_connections_per_session": 2, 00:11:56.932 "max_large_datain_per_connection": 64, 00:11:56.932 "max_queue_depth": 64, 00:11:56.932 "max_r2t_per_connection": 4, 00:11:56.932 "max_sessions": 128, 00:11:56.932 "mutual_chap": false, 00:11:56.932 "node_base": "iqn.2016-06.io.spdk", 00:11:56.932 "nop_in_interval": 30, 00:11:56.932 "nop_timeout": 60, 00:11:56.932 "pdu_pool_size": 36864, 00:11:56.932 "require_chap": false 00:11:56.932 } 00:11:56.932 } 00:11:56.932 ] 00:11:56.932 } 00:11:56.932 ] 00:11:56.932 } 00:11:56.932 13:30:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:56.932 13:30:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 74270 00:11:56.932 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 74270 
']' 00:11:56.932 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 74270 00:11:56.932 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:11:56.932 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:56.932 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74270 00:11:56.932 killing process with pid 74270 00:11:56.932 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:56.932 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:56.932 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74270' 00:11:56.932 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 74270 00:11:56.932 13:30:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 74270 00:11:57.190 13:30:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:11:57.190 13:30:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=74308 00:11:57.190 13:30:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:12:02.452 13:30:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 74308 00:12:02.452 13:30:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 74308 ']' 00:12:02.452 13:30:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 74308 00:12:02.452 13:30:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:12:02.452 13:30:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:02.452 13:30:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74308 00:12:02.452 killing process with pid 74308 00:12:02.452 13:30:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:02.452 13:30:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:02.452 13:30:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74308' 00:12:02.452 13:30:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 74308 00:12:02.452 13:30:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 74308 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:12:02.711 00:12:02.711 real 0m7.056s 00:12:02.711 user 0m6.812s 00:12:02.711 sys 0m0.647s 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:02.711 ************************************ 00:12:02.711 END TEST skip_rpc_with_json 00:12:02.711 ************************************ 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:12:02.711 13:30:15 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:12:02.711 13:30:15 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:02.711 
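A note on the configuration dump and the replay that skip_rpc_with_json performs above: the file written by save_config is one JSON object whose "subsystems" array lists each subsystem with a "config" array of {"method", "params"} entries, and the test simply feeds that file back into a fresh target and greps its output for the transport banner. A minimal sketch of that round trip follows; config.json and log.txt are placeholder names, not the exact paths used in this run.

    # capture the running target's configuration over the default RPC socket
    scripts/rpc.py save_config > config.json
    # relaunch the target from the file with the RPC server disabled, capturing its log
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
    # success criterion used above: the nvmf TCP transport really initialized
    grep -q 'TCP Transport Init' log.txt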
13:30:15 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:02.711 13:30:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.711 ************************************ 00:12:02.711 START TEST skip_rpc_with_delay 00:12:02.711 ************************************ 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:12:02.711 [2024-05-15 13:30:15.783559] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
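The error just above is the expected result: --wait-for-rpc tells the app to pause before framework initialization until an RPC arrives, which is meaningless when --no-rpc-server removes the RPC server entirely. For contrast, a hedged sketch of the combination that is supported; framework_start_init is assumed here as the usual init-resume RPC and does not appear in this log.

    # start the target but hold off framework initialization until told via RPC
    build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
    # ... issue any early-init RPCs here ...
    # then allow initialization to proceed
    scripts/rpc.py framework_start_init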
00:12:02.711 [2024-05-15 13:30:15.783745] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:02.711 00:12:02.711 real 0m0.075s 00:12:02.711 user 0m0.049s 00:12:02.711 sys 0m0.025s 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:02.711 ************************************ 00:12:02.711 END TEST skip_rpc_with_delay 00:12:02.711 ************************************ 00:12:02.711 13:30:15 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:12:02.969 13:30:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:12:02.969 13:30:15 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:12:02.969 13:30:15 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:12:02.969 13:30:15 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:02.969 13:30:15 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:02.969 13:30:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.969 ************************************ 00:12:02.969 START TEST exit_on_failed_rpc_init 00:12:02.969 ************************************ 00:12:02.969 13:30:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:12:02.969 13:30:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=74419 00:12:02.969 13:30:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:02.969 13:30:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 74419 00:12:02.969 13:30:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 74419 ']' 00:12:02.969 13:30:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.969 13:30:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:02.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.969 13:30:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.969 13:30:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:02.969 13:30:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:12:02.969 [2024-05-15 13:30:15.931459] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:02.969 [2024-05-15 13:30:15.931596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74419 ] 00:12:02.970 [2024-05-15 13:30:16.054527] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:03.228 [2024-05-15 13:30:16.073537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.228 [2024-05-15 13:30:16.166184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:12:04.239 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:12:04.239 [2024-05-15 13:30:17.072720] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:04.239 [2024-05-15 13:30:17.072807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74449 ] 00:12:04.239 [2024-05-15 13:30:17.190146] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:04.239 [2024-05-15 13:30:17.207560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.239 [2024-05-15 13:30:17.287861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.239 [2024-05-15 13:30:17.287955] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:12:04.239 [2024-05-15 13:30:17.287970] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:12:04.239 [2024-05-15 13:30:17.287979] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 74419 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 74419 ']' 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 74419 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74419 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:04.499 killing process with pid 74419 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74419' 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 74419 00:12:04.499 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 74419 00:12:04.757 00:12:04.757 real 0m1.926s 00:12:04.757 user 0m2.278s 00:12:04.757 sys 0m0.449s 00:12:04.757 ************************************ 00:12:04.757 END TEST exit_on_failed_rpc_init 00:12:04.757 ************************************ 00:12:04.757 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:04.757 13:30:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:12:04.757 13:30:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:12:04.757 00:12:04.757 real 0m14.768s 00:12:04.757 user 0m14.238s 00:12:04.757 sys 0m1.612s 00:12:04.757 13:30:17 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:04.757 13:30:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:04.757 ************************************ 00:12:04.757 END TEST skip_rpc 00:12:04.757 ************************************ 00:12:05.016 13:30:17 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:12:05.016 13:30:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:05.016 13:30:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:05.016 13:30:17 -- common/autotest_common.sh@10 -- # set +x 00:12:05.016 
************************************ 00:12:05.016 START TEST rpc_client 00:12:05.016 ************************************ 00:12:05.016 13:30:17 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:12:05.016 * Looking for test storage... 00:12:05.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:12:05.016 13:30:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:12:05.016 OK 00:12:05.016 13:30:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:12:05.016 00:12:05.016 real 0m0.102s 00:12:05.016 user 0m0.049s 00:12:05.016 sys 0m0.058s 00:12:05.016 13:30:17 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:05.016 ************************************ 00:12:05.016 END TEST rpc_client 00:12:05.016 ************************************ 00:12:05.016 13:30:17 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:12:05.016 13:30:18 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:12:05.016 13:30:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:05.016 13:30:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:05.016 13:30:18 -- common/autotest_common.sh@10 -- # set +x 00:12:05.016 ************************************ 00:12:05.016 START TEST json_config 00:12:05.016 ************************************ 00:12:05.016 13:30:18 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:12:05.016 13:30:18 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@7 -- # uname -s 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:05.016 13:30:18 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:05.016 13:30:18 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:05.016 13:30:18 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:05.016 13:30:18 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.016 13:30:18 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.016 13:30:18 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.016 13:30:18 json_config -- paths/export.sh@5 -- # export PATH 00:12:05.016 13:30:18 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:05.016 13:30:18 json_config -- nvmf/common.sh@47 -- # : 0 00:12:05.017 13:30:18 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:05.017 13:30:18 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:05.017 13:30:18 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:05.017 13:30:18 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:05.017 13:30:18 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:05.017 13:30:18 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:05.017 13:30:18 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:05.017 13:30:18 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:05.017 13:30:18 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:12:05.017 13:30:18 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:12:05.017 13:30:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:12:05.017 13:30:18 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:12:05.017 13:30:18 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:12:05.017 13:30:18 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:12:05.017 13:30:18 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:12:05.017 13:30:18 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:12:05.017 13:30:18 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:12:05.017 13:30:18 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:12:05.017 13:30:18 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:12:05.017 13:30:18 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:12:05.017 13:30:18 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:12:05.017 13:30:18 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:12:05.017 13:30:18 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:05.277 INFO: JSON configuration test init 00:12:05.277 13:30:18 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:12:05.277 13:30:18 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:12:05.277 13:30:18 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:12:05.277 13:30:18 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:05.277 13:30:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:05.277 13:30:18 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:12:05.277 13:30:18 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:05.277 13:30:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:05.277 13:30:18 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:12:05.277 13:30:18 json_config -- json_config/common.sh@9 -- # local app=target 00:12:05.277 13:30:18 json_config -- json_config/common.sh@10 -- # shift 00:12:05.277 13:30:18 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:05.277 13:30:18 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:05.277 13:30:18 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:12:05.277 13:30:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:05.277 13:30:18 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:05.277 13:30:18 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=74567 00:12:05.277 13:30:18 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:12:05.277 13:30:18 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:05.277 Waiting for target to run... 00:12:05.277 13:30:18 json_config -- json_config/common.sh@25 -- # waitforlisten 74567 /var/tmp/spdk_tgt.sock 00:12:05.277 13:30:18 json_config -- common/autotest_common.sh@827 -- # '[' -z 74567 ']' 00:12:05.277 13:30:18 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:05.277 13:30:18 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:05.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
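The waitforlisten call above blocks until something answers on /var/tmp/spdk_tgt.sock. The loop below is only a rough stand-in for that wait, not a claim about what the helper does internally; rpc_get_methods is assumed to be available as a cheap probe.

    # poll the RPC socket until the target responds (retry count is a placeholder)
    for i in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done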
00:12:05.277 13:30:18 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:05.277 13:30:18 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:05.277 13:30:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:05.277 [2024-05-15 13:30:18.183335] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:05.277 [2024-05-15 13:30:18.183951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74567 ] 00:12:05.844 [2024-05-15 13:30:18.645781] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:05.844 [2024-05-15 13:30:18.668278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.844 [2024-05-15 13:30:18.745816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.101 13:30:19 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:06.101 00:12:06.101 13:30:19 json_config -- common/autotest_common.sh@860 -- # return 0 00:12:06.101 13:30:19 json_config -- json_config/common.sh@26 -- # echo '' 00:12:06.101 13:30:19 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:12:06.101 13:30:19 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:12:06.358 13:30:19 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:06.358 13:30:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:06.358 13:30:19 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:12:06.358 13:30:19 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:12:06.358 13:30:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:06.358 13:30:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:06.358 13:30:19 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:12:06.358 13:30:19 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:12:06.358 13:30:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:12:06.924 13:30:19 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:12:06.924 13:30:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:12:06.924 13:30:19 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:06.924 13:30:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:06.924 13:30:19 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:12:06.924 13:30:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:12:06.924 13:30:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:12:06.924 13:30:19 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:12:06.924 13:30:19 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:12:06.924 13:30:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:12:06.924 13:30:20 
json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:12:06.924 13:30:20 json_config -- json_config/json_config.sh@48 -- # local get_types 00:12:06.924 13:30:20 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:12:06.924 13:30:20 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:12:06.924 13:30:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:06.924 13:30:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:07.180 13:30:20 json_config -- json_config/json_config.sh@55 -- # return 0 00:12:07.180 13:30:20 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:12:07.180 13:30:20 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:12:07.180 13:30:20 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:12:07.180 13:30:20 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:12:07.180 13:30:20 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:12:07.180 13:30:20 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:12:07.180 13:30:20 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:07.180 13:30:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:07.180 13:30:20 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:12:07.180 13:30:20 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:12:07.180 13:30:20 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:12:07.180 13:30:20 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:12:07.181 13:30:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:12:07.470 MallocForNvmf0 00:12:07.470 13:30:20 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:12:07.470 13:30:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:12:07.470 MallocForNvmf1 00:12:07.470 13:30:20 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:12:07.470 13:30:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:12:07.728 [2024-05-15 13:30:20.798569] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.728 13:30:20 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:07.728 13:30:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:08.296 13:30:21 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:12:08.296 13:30:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:12:08.296 
13:30:21 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:12:08.296 13:30:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:12:08.562 13:30:21 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:12:08.562 13:30:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:12:09.128 [2024-05-15 13:30:21.939024] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:09.128 [2024-05-15 13:30:21.939373] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:12:09.128 13:30:21 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:12:09.128 13:30:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.128 13:30:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:09.128 13:30:21 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:12:09.128 13:30:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.128 13:30:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:09.128 13:30:22 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:12:09.128 13:30:22 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:12:09.128 13:30:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:12:09.386 MallocBdevForConfigChangeCheck 00:12:09.386 13:30:22 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:12:09.386 13:30:22 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.386 13:30:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:09.386 13:30:22 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:12:09.386 13:30:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:09.952 INFO: shutting down applications... 00:12:09.952 13:30:22 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
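Pulling the RPC calls from the run above into one place, this is roughly the sequence that built the nvmf side of the configuration before save_config snapshotted it. The rpc/sock shell variables and the redirect into spdk_tgt_config.json are shorthand added here; the RPC names and arguments are the ones shown in the log.

    rpc=scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    $rpc -s $sock bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc -s $sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    $rpc -s $sock nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc -s $sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc -s $sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    # snapshot the result for the comparisons that follow
    $rpc -s $sock save_config > spdk_tgt_config.json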
00:12:09.952 13:30:22 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:12:09.952 13:30:22 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:12:09.952 13:30:22 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:12:09.952 13:30:22 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:12:10.209 Calling clear_iscsi_subsystem 00:12:10.209 Calling clear_nvmf_subsystem 00:12:10.209 Calling clear_nbd_subsystem 00:12:10.209 Calling clear_ublk_subsystem 00:12:10.209 Calling clear_vhost_blk_subsystem 00:12:10.209 Calling clear_vhost_scsi_subsystem 00:12:10.209 Calling clear_bdev_subsystem 00:12:10.209 13:30:23 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:12:10.209 13:30:23 json_config -- json_config/json_config.sh@343 -- # count=100 00:12:10.209 13:30:23 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:12:10.209 13:30:23 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:10.209 13:30:23 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:12:10.209 13:30:23 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:12:10.468 13:30:23 json_config -- json_config/json_config.sh@345 -- # break 00:12:10.468 13:30:23 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:12:10.468 13:30:23 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:12:10.468 13:30:23 json_config -- json_config/common.sh@31 -- # local app=target 00:12:10.468 13:30:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:10.468 13:30:23 json_config -- json_config/common.sh@35 -- # [[ -n 74567 ]] 00:12:10.468 13:30:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 74567 00:12:10.468 [2024-05-15 13:30:23.479109] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:10.468 13:30:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:10.468 13:30:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:10.468 13:30:23 json_config -- json_config/common.sh@41 -- # kill -0 74567 00:12:10.468 13:30:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:12:11.034 13:30:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:12:11.034 13:30:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:11.034 13:30:23 json_config -- json_config/common.sh@41 -- # kill -0 74567 00:12:11.034 13:30:23 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:11.034 13:30:23 json_config -- json_config/common.sh@43 -- # break 00:12:11.034 13:30:23 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:11.034 13:30:23 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:11.034 SPDK target shutdown done 00:12:11.034 INFO: relaunching applications... 00:12:11.034 13:30:23 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
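The clear/verify loop above keeps calling clear_config.py and then re-checking that the saved configuration is empty. Roughly, each pass runs the pipeline below (paths as printed in the run; piping the three commands together is an assumption about how the loop combines them).

    cfg=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    # wipe every subsystem the target knows about over its RPC socket
    /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    # confirm the remaining config reduces to nothing
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $cfg -method delete_global_parameters | $cfg -method check_empty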
00:12:11.034 13:30:23 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:11.034 13:30:23 json_config -- json_config/common.sh@9 -- # local app=target 00:12:11.034 13:30:23 json_config -- json_config/common.sh@10 -- # shift 00:12:11.034 13:30:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:11.034 13:30:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:11.034 13:30:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:12:11.034 13:30:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:11.034 13:30:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:11.034 13:30:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=74848 00:12:11.034 13:30:23 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:11.034 13:30:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:11.034 Waiting for target to run... 00:12:11.034 13:30:23 json_config -- json_config/common.sh@25 -- # waitforlisten 74848 /var/tmp/spdk_tgt.sock 00:12:11.034 13:30:23 json_config -- common/autotest_common.sh@827 -- # '[' -z 74848 ']' 00:12:11.035 13:30:23 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:11.035 13:30:23 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:11.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:11.035 13:30:23 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:11.035 13:30:23 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:11.035 13:30:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:11.035 [2024-05-15 13:30:24.057026] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:11.035 [2024-05-15 13:30:24.057856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74848 ] 00:12:11.601 [2024-05-15 13:30:24.486956] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:11.601 [2024-05-15 13:30:24.507785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.601 [2024-05-15 13:30:24.575067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.859 [2024-05-15 13:30:24.877805] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:11.859 [2024-05-15 13:30:24.909706] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:11.859 [2024-05-15 13:30:24.910000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:12:12.116 13:30:25 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:12.116 00:12:12.116 13:30:25 json_config -- common/autotest_common.sh@860 -- # return 0 00:12:12.116 13:30:25 json_config -- json_config/common.sh@26 -- # echo '' 00:12:12.116 13:30:25 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:12:12.116 INFO: Checking if target configuration is the same... 00:12:12.116 13:30:25 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:12:12.116 13:30:25 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:12.116 13:30:25 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:12:12.116 13:30:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:12.116 + '[' 2 -ne 2 ']' 00:12:12.116 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:12:12.116 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:12:12.116 + rootdir=/home/vagrant/spdk_repo/spdk 00:12:12.116 +++ basename /dev/fd/62 00:12:12.116 ++ mktemp /tmp/62.XXX 00:12:12.116 + tmp_file_1=/tmp/62.Ccu 00:12:12.116 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:12.116 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:12:12.116 + tmp_file_2=/tmp/spdk_tgt_config.json.ln3 00:12:12.116 + ret=0 00:12:12.116 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:12.690 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:12.690 + diff -u /tmp/62.Ccu /tmp/spdk_tgt_config.json.ln3 00:12:12.690 INFO: JSON config files are the same 00:12:12.690 + echo 'INFO: JSON config files are the same' 00:12:12.690 + rm /tmp/62.Ccu /tmp/spdk_tgt_config.json.ln3 00:12:12.690 + exit 0 00:12:12.690 13:30:25 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:12:12.690 INFO: changing configuration and checking if this can be detected... 00:12:12.690 13:30:25 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:12:12.690 13:30:25 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:12:12.690 13:30:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:12:12.949 13:30:25 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:12.949 13:30:25 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:12:12.949 13:30:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:12:12.949 + '[' 2 -ne 2 ']' 00:12:12.949 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:12:12.949 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:12:12.949 + rootdir=/home/vagrant/spdk_repo/spdk 00:12:12.949 +++ basename /dev/fd/62 00:12:12.949 ++ mktemp /tmp/62.XXX 00:12:12.949 + tmp_file_1=/tmp/62.DTf 00:12:12.949 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:12.949 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:12:12.949 + tmp_file_2=/tmp/spdk_tgt_config.json.8sE 00:12:12.949 + ret=0 00:12:12.949 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:13.517 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:12:13.517 + diff -u /tmp/62.DTf /tmp/spdk_tgt_config.json.8sE 00:12:13.517 + ret=1 00:12:13.517 + echo '=== Start of file: /tmp/62.DTf ===' 00:12:13.517 + cat /tmp/62.DTf 00:12:13.517 + echo '=== End of file: /tmp/62.DTf ===' 00:12:13.517 + echo '' 00:12:13.517 + echo '=== Start of file: /tmp/spdk_tgt_config.json.8sE ===' 00:12:13.517 + cat /tmp/spdk_tgt_config.json.8sE 00:12:13.517 + echo '=== End of file: /tmp/spdk_tgt_config.json.8sE ===' 00:12:13.517 + echo '' 00:12:13.517 + rm /tmp/62.DTf /tmp/spdk_tgt_config.json.8sE 00:12:13.517 + exit 1 00:12:13.517 INFO: configuration change detected. 00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 
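Both comparisons above hinge on sorting each configuration before diffing, so ordering differences do not count as changes; deleting the marker bdev is what must flip the diff from empty to non-empty. A condensed sketch; the two temporary file names are placeholders for the mktemp outputs seen in the log.

    cfg=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    # normalize the live config and the saved snapshot, then compare
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $cfg -method sort > /tmp/live.json
    $cfg -method sort < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/saved.json /tmp/live.json    # empty diff means the configs match
    # removing the marker bdev must make the next comparison fail
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck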
00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@317 -- # [[ -n 74848 ]] 00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@193 -- # uname -s 00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:13.517 13:30:26 json_config -- json_config/json_config.sh@323 -- # killprocess 74848 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@946 -- # '[' -z 74848 ']' 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@950 -- # kill -0 74848 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@951 -- # uname 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74848 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:13.517 killing process with pid 74848 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74848' 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@965 -- # kill 74848 00:12:13.517 [2024-05-15 13:30:26.532809] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:13.517 13:30:26 json_config -- common/autotest_common.sh@970 -- # wait 74848 00:12:13.775 13:30:26 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:12:13.775 13:30:26 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:12:13.775 13:30:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:13.775 13:30:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:13.775 13:30:26 json_config -- json_config/json_config.sh@328 -- # return 0 
00:12:13.775 INFO: Success 00:12:13.775 13:30:26 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:12:13.775 00:12:13.775 real 0m8.778s 00:12:13.775 user 0m12.631s 00:12:13.775 sys 0m2.000s 00:12:13.775 13:30:26 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:13.775 ************************************ 00:12:13.775 END TEST json_config 00:12:13.775 13:30:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:12:13.775 ************************************ 00:12:13.775 13:30:26 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:12:13.775 13:30:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:13.775 13:30:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:13.775 13:30:26 -- common/autotest_common.sh@10 -- # set +x 00:12:13.775 ************************************ 00:12:13.775 START TEST json_config_extra_key 00:12:13.775 ************************************ 00:12:13.775 13:30:26 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:12:14.034 13:30:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:14.034 13:30:26 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.034 13:30:26 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.034 13:30:26 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.034 13:30:26 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.034 13:30:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.034 13:30:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.034 13:30:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:12:14.034 13:30:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:14.034 13:30:26 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:14.034 13:30:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:12:14.034 13:30:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:12:14.034 13:30:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:12:14.034 13:30:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:12:14.034 13:30:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:12:14.034 13:30:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:12:14.034 13:30:26 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:12:14.034 13:30:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:12:14.034 13:30:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:12:14.034 13:30:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:12:14.034 INFO: launching applications... 00:12:14.034 13:30:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:12:14.034 13:30:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:12:14.034 13:30:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:12:14.034 13:30:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:12:14.034 13:30:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:12:14.034 13:30:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:12:14.034 13:30:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:12:14.034 13:30:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:14.034 13:30:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:12:14.034 13:30:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=75024 00:12:14.034 Waiting for target to run... 00:12:14.034 13:30:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:12:14.034 13:30:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 75024 /var/tmp/spdk_tgt.sock 00:12:14.034 13:30:26 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 75024 ']' 00:12:14.034 13:30:26 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:12:14.034 13:30:26 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:12:14.034 13:30:26 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:14.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:12:14.034 13:30:26 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:12:14.034 13:30:26 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:14.034 13:30:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:14.034 [2024-05-15 13:30:27.003457] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:14.034 [2024-05-15 13:30:27.003590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75024 ] 00:12:14.601 [2024-05-15 13:30:27.453907] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
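The json_config_extra_key test boots the target directly from a pre-built JSON configuration instead of configuring it over RPC afterwards. A minimal sketch of the launch recorded above; the backgrounding and the app_pid capture are added here for illustration (the harness stores the pid in its app_pid array):

  # Start spdk_tgt on one core with 1024 MiB of memory, a private RPC socket,
  # and the extra_key.json configuration applied at startup
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
  app_pid=$!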
00:12:14.601 [2024-05-15 13:30:27.474766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.601 [2024-05-15 13:30:27.544972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.166 13:30:27 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:15.166 13:30:27 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:12:15.166 13:30:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:12:15.166 00:12:15.166 INFO: shutting down applications... 00:12:15.166 13:30:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:12:15.166 13:30:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:12:15.166 13:30:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:12:15.166 13:30:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:12:15.166 13:30:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 75024 ]] 00:12:15.166 13:30:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 75024 00:12:15.166 13:30:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:12:15.166 13:30:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:15.166 13:30:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 75024 00:12:15.166 13:30:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:12:15.425 13:30:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:12:15.425 13:30:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:12:15.425 13:30:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 75024 00:12:15.425 13:30:28 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:12:15.425 13:30:28 json_config_extra_key -- json_config/common.sh@43 -- # break 00:12:15.425 13:30:28 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:12:15.425 SPDK target shutdown done 00:12:15.425 13:30:28 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:12:15.425 Success 00:12:15.425 13:30:28 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:12:15.425 00:12:15.425 real 0m1.644s 00:12:15.425 user 0m1.540s 00:12:15.425 sys 0m0.456s 00:12:15.425 13:30:28 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:15.425 13:30:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:12:15.425 ************************************ 00:12:15.425 END TEST json_config_extra_key 00:12:15.425 ************************************ 00:12:15.684 13:30:28 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:15.684 13:30:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:15.684 13:30:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:15.684 13:30:28 -- common/autotest_common.sh@10 -- # set +x 00:12:15.684 ************************************ 00:12:15.684 START TEST alias_rpc 00:12:15.684 ************************************ 00:12:15.684 13:30:28 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:12:15.684 * Looking for test storage... 
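The shutdown traced above follows the pattern in test/json_config/common.sh: send SIGINT to the target, then poll the pid until it disappears. Condensed from the xtrace, with $app_pid standing in for the stored pid (75024 in this run) and the stderr redirect added for readability:

  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$app_pid" 2>/dev/null || break   # target exited, shutdown complete
      sleep 0.5
  done
  echo 'SPDK target shutdown done'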
00:12:15.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:12:15.684 13:30:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:12:15.684 13:30:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=75095 00:12:15.684 13:30:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:15.684 13:30:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 75095 00:12:15.684 13:30:28 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 75095 ']' 00:12:15.684 13:30:28 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.684 13:30:28 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:15.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.684 13:30:28 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.684 13:30:28 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:15.684 13:30:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.684 [2024-05-15 13:30:28.703272] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:15.684 [2024-05-15 13:30:28.703404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75095 ] 00:12:15.946 [2024-05-15 13:30:28.827177] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
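Here the alias_rpc test starts a plain spdk_tgt and blocks in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A roughly equivalent poll is sketched below; the real helper lives in the shared autotest common scripts, and using rpc_get_methods as the readiness probe is an assumption here, not its actual implementation:

  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # keep retrying until spdk_tgt is listening on the socket
  done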
00:12:15.946 [2024-05-15 13:30:28.846939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.946 [2024-05-15 13:30:28.954651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.880 13:30:29 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:16.880 13:30:29 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:16.880 13:30:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:12:17.138 13:30:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 75095 00:12:17.138 13:30:30 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 75095 ']' 00:12:17.138 13:30:30 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 75095 00:12:17.138 13:30:30 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:12:17.138 13:30:30 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:17.138 13:30:30 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75095 00:12:17.138 13:30:30 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:17.138 13:30:30 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:17.138 13:30:30 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75095' 00:12:17.138 killing process with pid 75095 00:12:17.138 13:30:30 alias_rpc -- common/autotest_common.sh@965 -- # kill 75095 00:12:17.138 13:30:30 alias_rpc -- common/autotest_common.sh@970 -- # wait 75095 00:12:17.396 00:12:17.396 real 0m1.865s 00:12:17.396 user 0m2.125s 00:12:17.396 sys 0m0.483s 00:12:17.396 13:30:30 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:17.396 ************************************ 00:12:17.396 END TEST alias_rpc 00:12:17.396 ************************************ 00:12:17.396 13:30:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.396 13:30:30 -- spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:12:17.396 13:30:30 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:17.397 13:30:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:17.397 13:30:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:17.397 13:30:30 -- common/autotest_common.sh@10 -- # set +x 00:12:17.397 ************************************ 00:12:17.397 START TEST dpdk_mem_utility 00:12:17.397 ************************************ 00:12:17.397 13:30:30 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:12:17.655 * Looking for test storage... 
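The core of the alias_rpc test is the load_config call above: a JSON configuration that still uses deprecated RPC method names is fed to rpc.py with -i so those aliases are accepted. A sketch of that call; the input filename is invented for illustration, since the log does not show what was piped in on this run:

  # -i lets load_config resolve deprecated alias method names
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i < config_with_aliases.json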
00:12:17.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:12:17.655 13:30:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:17.655 13:30:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=75187 00:12:17.655 13:30:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:17.655 13:30:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 75187 00:12:17.655 13:30:30 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 75187 ']' 00:12:17.655 13:30:30 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.655 13:30:30 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:17.655 13:30:30 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.655 13:30:30 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:17.655 13:30:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:17.655 [2024-05-15 13:30:30.595419] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:17.655 [2024-05-15 13:30:30.595523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75187 ] 00:12:17.655 [2024-05-15 13:30:30.713220] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:17.655 [2024-05-15 13:30:30.729226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.913 [2024-05-15 13:30:30.826118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.848 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:18.848 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:12:18.848 13:30:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:12:18.848 13:30:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:12:18.848 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.848 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:18.849 { 00:12:18.849 "filename": "/tmp/spdk_mem_dump.txt" 00:12:18.849 } 00:12:18.849 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.849 13:30:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:12:18.849 DPDK memory size 814.000000 MiB in 1 heap(s) 00:12:18.849 1 heaps totaling size 814.000000 MiB 00:12:18.849 size: 814.000000 MiB heap id: 0 00:12:18.849 end heaps---------- 00:12:18.849 8 mempools totaling size 598.116089 MiB 00:12:18.849 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:12:18.849 size: 158.602051 MiB name: PDU_data_out_Pool 00:12:18.849 size: 84.521057 MiB name: bdev_io_75187 00:12:18.849 size: 51.011292 MiB name: evtpool_75187 00:12:18.849 size: 50.003479 MiB name: msgpool_75187 00:12:18.849 size: 21.763794 MiB name: PDU_Pool 00:12:18.849 size: 19.513306 MiB name: SCSI_TASK_Pool 00:12:18.849 size: 0.026123 MiB name: Session_Pool 00:12:18.849 end mempools------- 00:12:18.849 6 memzones totaling size 4.142822 MiB 00:12:18.849 size: 1.000366 MiB name: RG_ring_0_75187 00:12:18.849 size: 1.000366 MiB name: RG_ring_1_75187 00:12:18.849 size: 1.000366 MiB name: RG_ring_4_75187 00:12:18.849 size: 1.000366 MiB name: RG_ring_5_75187 00:12:18.849 size: 0.125366 MiB name: RG_ring_2_75187 00:12:18.849 size: 0.015991 MiB name: RG_ring_3_75187 00:12:18.849 end memzones------- 00:12:18.849 13:30:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:12:18.849 heap id: 0 total size: 814.000000 MiB number of busy elements: 213 number of free elements: 15 00:12:18.849 list of free elements. 
size: 12.487854 MiB 00:12:18.849 element at address: 0x200000400000 with size: 1.999512 MiB 00:12:18.849 element at address: 0x200018e00000 with size: 0.999878 MiB 00:12:18.849 element at address: 0x200019000000 with size: 0.999878 MiB 00:12:18.849 element at address: 0x200003e00000 with size: 0.996277 MiB 00:12:18.849 element at address: 0x200031c00000 with size: 0.994446 MiB 00:12:18.849 element at address: 0x200013800000 with size: 0.978699 MiB 00:12:18.849 element at address: 0x200007000000 with size: 0.959839 MiB 00:12:18.849 element at address: 0x200019200000 with size: 0.936584 MiB 00:12:18.849 element at address: 0x200000200000 with size: 0.837036 MiB 00:12:18.849 element at address: 0x20001aa00000 with size: 0.572815 MiB 00:12:18.849 element at address: 0x20000b200000 with size: 0.489990 MiB 00:12:18.849 element at address: 0x200000800000 with size: 0.487061 MiB 00:12:18.849 element at address: 0x200019400000 with size: 0.485657 MiB 00:12:18.849 element at address: 0x200027e00000 with size: 0.398499 MiB 00:12:18.849 element at address: 0x200003a00000 with size: 0.351685 MiB 00:12:18.849 list of standard malloc elements. size: 199.249573 MiB 00:12:18.849 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:12:18.849 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:12:18.849 element at address: 0x200018efff80 with size: 1.000122 MiB 00:12:18.849 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:12:18.849 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:12:18.849 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:12:18.849 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:12:18.849 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:12:18.849 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:12:18.849 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:12:18.849 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003adb300 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003adb500 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003affa80 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003affb40 with size: 0.000183 MiB 00:12:18.849 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:12:18.849 element at 
address: 0x20000b27dac0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:12:18.849 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:12:18.849 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94a80 
with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:12:18.850 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e66040 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e66100 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6cd00 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6e700 with size: 0.000183 MiB 
00:12:18.850 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:12:18.850 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:12:18.850 list of memzone associated elements. 
size: 602.262573 MiB 00:12:18.850 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:12:18.850 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:12:18.850 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:12:18.850 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:12:18.850 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:12:18.850 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_75187_0 00:12:18.850 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:12:18.850 associated memzone info: size: 48.002930 MiB name: MP_evtpool_75187_0 00:12:18.850 element at address: 0x200003fff380 with size: 48.003052 MiB 00:12:18.850 associated memzone info: size: 48.002930 MiB name: MP_msgpool_75187_0 00:12:18.850 element at address: 0x2000195be940 with size: 20.255554 MiB 00:12:18.850 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:12:18.850 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:12:18.850 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:12:18.850 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:12:18.850 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_75187 00:12:18.850 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:12:18.850 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_75187 00:12:18.850 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:12:18.850 associated memzone info: size: 1.007996 MiB name: MP_evtpool_75187 00:12:18.850 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:12:18.850 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:12:18.850 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:12:18.850 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:12:18.850 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:12:18.850 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:12:18.850 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:12:18.850 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:12:18.850 element at address: 0x200003eff180 with size: 1.000488 MiB 00:12:18.850 associated memzone info: size: 1.000366 MiB name: RG_ring_0_75187 00:12:18.850 element at address: 0x200003affc00 with size: 1.000488 MiB 00:12:18.850 associated memzone info: size: 1.000366 MiB name: RG_ring_1_75187 00:12:18.850 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:12:18.850 associated memzone info: size: 1.000366 MiB name: RG_ring_4_75187 00:12:18.850 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:12:18.850 associated memzone info: size: 1.000366 MiB name: RG_ring_5_75187 00:12:18.850 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:12:18.850 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_75187 00:12:18.850 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:12:18.850 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:12:18.850 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:12:18.850 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:12:18.850 element at address: 0x20001947c540 with size: 0.250488 MiB 00:12:18.850 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:12:18.850 element at address: 0x200003adf880 with size: 0.125488 MiB 00:12:18.850 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_75187 00:12:18.850 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:12:18.850 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:12:18.851 element at address: 0x200027e661c0 with size: 0.023743 MiB 00:12:18.851 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:12:18.851 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:12:18.851 associated memzone info: size: 0.015991 MiB name: RG_ring_3_75187 00:12:18.851 element at address: 0x200027e6c300 with size: 0.002441 MiB 00:12:18.851 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:12:18.851 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:12:18.851 associated memzone info: size: 0.000183 MiB name: MP_msgpool_75187 00:12:18.851 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:12:18.851 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_75187 00:12:18.851 element at address: 0x200027e6cdc0 with size: 0.000305 MiB 00:12:18.851 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:12:18.851 13:30:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:12:18.851 13:30:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 75187 00:12:18.851 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 75187 ']' 00:12:18.851 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 75187 00:12:18.851 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:12:18.851 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:18.851 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75187 00:12:18.851 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:18.851 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:18.851 killing process with pid 75187 00:12:18.851 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75187' 00:12:18.851 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 75187 00:12:18.851 13:30:31 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 75187 00:12:19.109 00:12:19.109 real 0m1.679s 00:12:19.109 user 0m1.825s 00:12:19.109 sys 0m0.428s 00:12:19.109 13:30:32 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:19.109 13:30:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:12:19.109 ************************************ 00:12:19.109 END TEST dpdk_mem_utility 00:12:19.109 ************************************ 00:12:19.109 13:30:32 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:12:19.109 13:30:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:19.109 13:30:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.109 13:30:32 -- common/autotest_common.sh@10 -- # set +x 00:12:19.109 ************************************ 00:12:19.109 START TEST event 00:12:19.109 ************************************ 00:12:19.109 13:30:32 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:12:19.367 * Looking for test storage... 
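The long heap/mempool/memzone listing above is the dpdk_mem_utility test at work: env_dpdk_get_mem_stats asks the running target to write its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then summarizes that dump (the -m 0 form produced the per-element detail shown above). A sketch of the same sequence driven by hand, assuming the default RPC socket and that the script reads the dump file from its default location:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # writes /tmp/spdk_mem_dump.txt
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                # totals per heap, mempool, memzone
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0           # per-element listing, as dumped above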
00:12:19.367 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:19.367 13:30:32 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:19.367 13:30:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:12:19.367 13:30:32 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:19.367 13:30:32 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:12:19.367 13:30:32 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.367 13:30:32 event -- common/autotest_common.sh@10 -- # set +x 00:12:19.367 ************************************ 00:12:19.367 START TEST event_perf 00:12:19.367 ************************************ 00:12:19.367 13:30:32 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:12:19.367 Running I/O for 1 seconds...[2024-05-15 13:30:32.299633] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:19.367 [2024-05-15 13:30:32.299766] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75282 ] 00:12:19.367 [2024-05-15 13:30:32.431772] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:19.367 [2024-05-15 13:30:32.451832] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:19.625 [2024-05-15 13:30:32.556717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.625 [2024-05-15 13:30:32.556847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:19.625 [2024-05-15 13:30:32.556928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.625 Running I/O for 1 seconds...[2024-05-15 13:30:32.556936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.592 00:12:20.592 lcore 0: 185065 00:12:20.592 lcore 1: 185066 00:12:20.592 lcore 2: 185065 00:12:20.592 lcore 3: 185066 00:12:20.592 done. 00:12:20.592 ************************************ 00:12:20.592 END TEST event_perf 00:12:20.592 ************************************ 00:12:20.592 00:12:20.592 real 0m1.345s 00:12:20.592 user 0m4.146s 00:12:20.592 sys 0m0.076s 00:12:20.592 13:30:33 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:20.592 13:30:33 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:12:20.592 13:30:33 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:20.592 13:30:33 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:20.592 13:30:33 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:20.592 13:30:33 event -- common/autotest_common.sh@10 -- # set +x 00:12:20.592 ************************************ 00:12:20.592 START TEST event_reactor 00:12:20.592 ************************************ 00:12:20.592 13:30:33 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:12:20.849 [2024-05-15 13:30:33.691158] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
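event_perf above hammers the event framework across four reactors for one second; the per-lcore counts (about 185k apiece in this run) are its output. The invocation, copied from the run_test line:

  # 4-core mask, 1-second run
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1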
00:12:20.849 [2024-05-15 13:30:33.691279] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75315 ] 00:12:20.849 [2024-05-15 13:30:33.807949] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:20.849 [2024-05-15 13:30:33.823950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.849 [2024-05-15 13:30:33.920317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.224 test_start 00:12:22.224 oneshot 00:12:22.224 tick 100 00:12:22.224 tick 100 00:12:22.224 tick 250 00:12:22.224 tick 100 00:12:22.224 tick 100 00:12:22.224 tick 250 00:12:22.224 tick 500 00:12:22.224 tick 100 00:12:22.224 tick 100 00:12:22.224 tick 100 00:12:22.224 tick 250 00:12:22.224 tick 100 00:12:22.224 tick 100 00:12:22.224 test_end 00:12:22.224 00:12:22.224 real 0m1.318s 00:12:22.224 user 0m1.158s 00:12:22.224 sys 0m0.055s 00:12:22.224 ************************************ 00:12:22.224 END TEST event_reactor 00:12:22.224 ************************************ 00:12:22.224 13:30:34 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:22.224 13:30:34 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:12:22.224 13:30:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:22.224 13:30:35 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:12:22.224 13:30:35 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:22.224 13:30:35 event -- common/autotest_common.sh@10 -- # set +x 00:12:22.224 ************************************ 00:12:22.224 START TEST event_reactor_perf 00:12:22.224 ************************************ 00:12:22.224 13:30:35 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:12:22.224 [2024-05-15 13:30:35.064689] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:22.224 [2024-05-15 13:30:35.064815] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75351 ] 00:12:22.224 [2024-05-15 13:30:35.185037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
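event_reactor checks timed events on a single reactor (the oneshot/tick 100/250/500 lines above are its schedule firing), while event_reactor_perf, just started here, measures raw event throughput on one core. Its invocation, again taken from the run_test line:

  # single core, 1-second measurement
  /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1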
00:12:22.224 [2024-05-15 13:30:35.200433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.224 [2024-05-15 13:30:35.280391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.598 test_start 00:12:23.598 test_end 00:12:23.598 Performance: 376662 events per second 00:12:23.598 ************************************ 00:12:23.598 END TEST event_reactor_perf 00:12:23.598 ************************************ 00:12:23.598 00:12:23.598 real 0m1.296s 00:12:23.598 user 0m1.132s 00:12:23.598 sys 0m0.058s 00:12:23.598 13:30:36 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:23.598 13:30:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:12:23.598 13:30:36 event -- event/event.sh@49 -- # uname -s 00:12:23.598 13:30:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:12:23.598 13:30:36 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:23.598 13:30:36 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:23.598 13:30:36 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:23.598 13:30:36 event -- common/autotest_common.sh@10 -- # set +x 00:12:23.598 ************************************ 00:12:23.598 START TEST event_scheduler 00:12:23.598 ************************************ 00:12:23.598 13:30:36 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:12:23.598 * Looking for test storage... 00:12:23.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:12:23.598 13:30:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:12:23.598 13:30:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=75412 00:12:23.598 13:30:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:12:23.599 13:30:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:12:23.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.599 13:30:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 75412 00:12:23.599 13:30:36 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 75412 ']' 00:12:23.599 13:30:36 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.599 13:30:36 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:23.599 13:30:36 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.599 13:30:36 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:23.599 13:30:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:23.599 [2024-05-15 13:30:36.536111] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
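The scheduler test app is started paused with --wait-for-rpc on four cores, with core 2 as the main core (-p 0x2). The test then selects the dynamic scheduler over RPC and only afterwards initializes the framework, as the entries that follow show. A sketch of that order, assuming the default /var/tmp/spdk.sock socket and with the readiness wait elided:

  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  # ...wait for the RPC socket to come up...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init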
00:12:23.599 [2024-05-15 13:30:36.536472] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75412 ] 00:12:23.599 [2024-05-15 13:30:36.658513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:23.599 [2024-05-15 13:30:36.678229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.856 [2024-05-15 13:30:36.781201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.856 [2024-05-15 13:30:36.781304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.856 [2024-05-15 13:30:36.781375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.856 [2024-05-15 13:30:36.781376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.461 13:30:37 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:24.461 13:30:37 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:12:24.461 13:30:37 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:12:24.461 13:30:37 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.461 13:30:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:24.720 POWER: Env isn't set yet! 00:12:24.720 POWER: Attempting to initialise ACPI cpufreq power management... 00:12:24.720 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:24.720 POWER: Cannot set governor of lcore 0 to userspace 00:12:24.720 POWER: Attempting to initialise PSTAT power management... 00:12:24.720 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:24.720 POWER: Cannot set governor of lcore 0 to performance 00:12:24.720 POWER: Attempting to initialise AMD PSTATE power management... 00:12:24.720 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:24.720 POWER: Cannot set governor of lcore 0 to userspace 00:12:24.720 POWER: Attempting to initialise CPPC power management... 00:12:24.720 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:12:24.720 POWER: Cannot set governor of lcore 0 to userspace 00:12:24.720 POWER: Attempting to initialise VM power management... 
00:12:24.720 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:12:24.720 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:12:24.720 POWER: Unable to set Power Management Environment for lcore 0 00:12:24.720 [2024-05-15 13:30:37.564029] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:12:24.720 [2024-05-15 13:30:37.564043] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:12:24.720 [2024-05-15 13:30:37.564051] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:12:24.720 13:30:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.720 13:30:37 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:12:24.720 13:30:37 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.720 13:30:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:24.720 [2024-05-15 13:30:37.660374] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:12:24.720 13:30:37 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.720 13:30:37 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:12:24.720 13:30:37 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:24.720 13:30:37 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:24.720 13:30:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:24.720 ************************************ 00:12:24.720 START TEST scheduler_create_thread 00:12:24.720 ************************************ 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:24.720 2 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:24.720 3 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:24.720 4 00:12:24.720 13:30:37 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:24.720 5 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:24.720 6 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:24.720 7 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:24.720 8 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:24.720 9 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:24.720 10 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n half_active -a 0 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.720 13:30:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:26.621 13:30:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.621 13:30:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:12:26.621 13:30:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:12:26.621 13:30:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.621 13:30:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:27.555 ************************************ 00:12:27.555 END TEST scheduler_create_thread 00:12:27.555 ************************************ 00:12:27.555 13:30:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.555 00:12:27.555 real 0m2.617s 00:12:27.555 user 0m0.018s 00:12:27.555 sys 0m0.005s 00:12:27.555 13:30:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:27.555 13:30:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:12:27.555 13:30:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:12:27.555 13:30:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 75412 00:12:27.555 13:30:40 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 75412 ']' 00:12:27.555 13:30:40 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 75412 00:12:27.555 13:30:40 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:12:27.555 13:30:40 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:27.555 13:30:40 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75412 00:12:27.555 killing process with pid 75412 00:12:27.555 13:30:40 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:12:27.555 13:30:40 
event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:12:27.555 13:30:40 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75412' 00:12:27.555 13:30:40 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 75412 00:12:27.555 13:30:40 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 75412 00:12:27.813 [2024-05-15 13:30:40.768813] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:12:28.071 00:12:28.071 real 0m4.603s 00:12:28.071 user 0m8.835s 00:12:28.071 sys 0m0.357s 00:12:28.071 13:30:41 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:28.071 ************************************ 00:12:28.071 END TEST event_scheduler 00:12:28.071 ************************************ 00:12:28.071 13:30:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:12:28.071 13:30:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:12:28.071 13:30:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:12:28.071 13:30:41 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:28.071 13:30:41 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:28.071 13:30:41 event -- common/autotest_common.sh@10 -- # set +x 00:12:28.071 ************************************ 00:12:28.071 START TEST app_repeat 00:12:28.071 ************************************ 00:12:28.071 13:30:41 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:12:28.071 13:30:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:28.071 13:30:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:28.071 13:30:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:12:28.071 13:30:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:28.071 13:30:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:12:28.071 13:30:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:12:28.071 13:30:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:12:28.071 13:30:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=75531 00:12:28.071 13:30:41 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:12:28.071 13:30:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:12:28.071 Process app_repeat pid: 75531 00:12:28.071 spdk_app_start Round 0 00:12:28.071 13:30:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 75531' 00:12:28.071 13:30:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:28.071 13:30:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:12:28.071 13:30:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75531 /var/tmp/spdk-nbd.sock 00:12:28.071 13:30:41 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75531 ']' 00:12:28.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
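Before the app_repeat rounds get going, it is worth noting that the scheduler_create_thread section above is, stripped of the shell tracing, just a sequence of RPC-plugin calls issued against the running scheduler test app. A minimal standalone sketch of that sequence is shown below; it assumes the app is already listening on the default RPC socket, that PYTHONPATH contains the directory holding scheduler_plugin.py (an assumption, the harness arranges this itself), and, as the thread_id=11/12 assignments in the trace suggest, that scheduler_thread_create prints the id of the thread it creates.

    # Four busy threads pinned to cores 0-3, and four idle threads pinned the same way
    for mask in 0x1 0x2 0x4 0x8; do
      scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
      scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done
    # An unpinned thread that is busy roughly a third of the time
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
    # Create an idle thread, then raise it to 50% activity using the returned id
    thread_id=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    # Create one more thread and delete it again to exercise teardown
    thread_id=$(scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$thread_id"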
00:12:28.071 13:30:41 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:28.071 13:30:41 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:28.071 13:30:41 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:28.071 13:30:41 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:28.071 13:30:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:28.071 [2024-05-15 13:30:41.082059] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:28.071 [2024-05-15 13:30:41.082142] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75531 ] 00:12:28.329 [2024-05-15 13:30:41.205721] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:28.329 [2024-05-15 13:30:41.222934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:28.329 [2024-05-15 13:30:41.313486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.329 [2024-05-15 13:30:41.313476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.264 13:30:42 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:29.264 13:30:42 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:29.265 13:30:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:29.522 Malloc0 00:12:29.522 13:30:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:29.780 Malloc1 00:12:29.780 13:30:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:29.780 13:30:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:29.780 13:30:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:29.780 13:30:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:29.780 13:30:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:29.780 13:30:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:29.780 13:30:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:29.780 13:30:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:29.780 13:30:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:29.780 13:30:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:29.780 13:30:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:29.780 13:30:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:29.780 13:30:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:29.780 13:30:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:29.780 13:30:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:29.780 13:30:42 
event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:30.037 /dev/nbd0 00:12:30.037 13:30:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:30.037 13:30:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:30.037 13:30:42 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:30.037 13:30:42 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:30.037 13:30:42 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:30.037 13:30:42 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:30.037 13:30:42 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:30.037 13:30:42 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:30.037 13:30:42 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:30.037 13:30:42 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:30.037 13:30:42 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:30.037 1+0 records in 00:12:30.037 1+0 records out 00:12:30.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208583 s, 19.6 MB/s 00:12:30.037 13:30:42 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:30.037 13:30:42 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:30.037 13:30:42 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:30.037 13:30:43 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:30.037 13:30:43 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:30.037 13:30:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.037 13:30:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:30.037 13:30:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:30.296 /dev/nbd1 00:12:30.296 13:30:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:30.296 13:30:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:30.296 13:30:43 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:30.296 13:30:43 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:30.296 13:30:43 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:30.296 13:30:43 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:30.296 13:30:43 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:30.296 13:30:43 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:30.296 13:30:43 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:30.296 13:30:43 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:30.296 13:30:43 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:30.296 1+0 records in 00:12:30.296 1+0 records out 00:12:30.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337791 s, 12.1 MB/s 00:12:30.296 13:30:43 event.app_repeat -- 
common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:30.296 13:30:43 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:30.296 13:30:43 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:30.296 13:30:43 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:30.296 13:30:43 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:30.296 13:30:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.296 13:30:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:30.296 13:30:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:30.296 13:30:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:30.296 13:30:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:30.554 13:30:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:30.554 { 00:12:30.554 "bdev_name": "Malloc0", 00:12:30.554 "nbd_device": "/dev/nbd0" 00:12:30.554 }, 00:12:30.554 { 00:12:30.555 "bdev_name": "Malloc1", 00:12:30.555 "nbd_device": "/dev/nbd1" 00:12:30.555 } 00:12:30.555 ]' 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:30.555 { 00:12:30.555 "bdev_name": "Malloc0", 00:12:30.555 "nbd_device": "/dev/nbd0" 00:12:30.555 }, 00:12:30.555 { 00:12:30.555 "bdev_name": "Malloc1", 00:12:30.555 "nbd_device": "/dev/nbd1" 00:12:30.555 } 00:12:30.555 ]' 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:30.555 /dev/nbd1' 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:30.555 /dev/nbd1' 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:30.555 256+0 records in 00:12:30.555 256+0 records out 00:12:30.555 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00765311 s, 137 MB/s 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.555 13:30:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 
oflag=direct 00:12:30.812 256+0 records in 00:12:30.812 256+0 records out 00:12:30.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241519 s, 43.4 MB/s 00:12:30.812 13:30:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:30.812 13:30:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:30.812 256+0 records in 00:12:30.812 256+0 records out 00:12:30.812 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259415 s, 40.4 MB/s 00:12:30.812 13:30:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:30.812 13:30:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:30.812 13:30:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:30.812 13:30:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:30.812 13:30:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:30.813 13:30:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:30.813 13:30:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:30.813 13:30:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:30.813 13:30:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:30.813 13:30:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:30.813 13:30:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:30.813 13:30:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:30.813 13:30:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:30.813 13:30:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:30.813 13:30:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:30.813 13:30:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.813 13:30:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:30.813 13:30:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.813 13:30:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:31.070 13:30:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:31.070 13:30:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:31.070 13:30:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:31.070 13:30:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.070 13:30:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.070 13:30:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:31.070 13:30:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:31.070 13:30:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.070 13:30:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.070 13:30:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:31.328 13:30:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:31.328 13:30:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:31.328 13:30:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:31.328 13:30:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.328 13:30:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.328 13:30:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:31.328 13:30:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:31.328 13:30:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.328 13:30:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:31.328 13:30:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:31.328 13:30:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:31.586 13:30:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:31.586 13:30:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:31.586 13:30:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:31.586 13:30:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:31.586 13:30:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:31.586 13:30:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:31.586 13:30:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:31.586 13:30:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:31.586 13:30:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:31.586 13:30:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:31.586 13:30:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:31.586 13:30:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:31.586 13:30:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:31.844 13:30:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:32.101 [2024-05-15 13:30:45.093067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:32.101 [2024-05-15 13:30:45.180264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.101 [2024-05-15 13:30:45.180276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.359 [2024-05-15 13:30:45.233848] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:32.359 [2024-05-15 13:30:45.233909] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
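Each app_repeat round above runs the same nbd round-trip check from nbd_common.sh: export two malloc bdevs as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each, and compare it back. Stripped of the xtrace noise, the core of one round looks roughly like the sketch below (socket path and dd/cmp arguments exactly as they appear in the trace; the temp file name is shortened, and the write/verify passes are folded into one loop).

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096            # 64 MB bdev, 4096-byte blocks -> Malloc0
    $rpc bdev_malloc_create 64 4096            # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256     # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M nbdrandtest "$nbd"          # any mismatch fails the round
    done
    rm nbdrandtest

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1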
00:12:34.885 13:30:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:34.885 spdk_app_start Round 1 00:12:34.885 13:30:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:12:34.885 13:30:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75531 /var/tmp/spdk-nbd.sock 00:12:34.885 13:30:47 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75531 ']' 00:12:34.885 13:30:47 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:34.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:34.885 13:30:47 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:34.885 13:30:47 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:34.885 13:30:47 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:34.885 13:30:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:35.143 13:30:48 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:35.143 13:30:48 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:35.143 13:30:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:35.401 Malloc0 00:12:35.401 13:30:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:35.659 Malloc1 00:12:35.659 13:30:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:35.659 13:30:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:35.918 /dev/nbd0 00:12:36.176 13:30:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:36.176 13:30:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:36.176 13:30:49 event.app_repeat -- 
common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:36.176 13:30:49 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:36.176 13:30:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:36.176 13:30:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:36.176 13:30:49 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:36.176 13:30:49 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:36.176 13:30:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:36.176 13:30:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:36.176 13:30:49 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:36.176 1+0 records in 00:12:36.176 1+0 records out 00:12:36.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247949 s, 16.5 MB/s 00:12:36.176 13:30:49 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:36.176 13:30:49 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:36.176 13:30:49 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:36.176 13:30:49 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:36.176 13:30:49 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:36.176 13:30:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.176 13:30:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:36.176 13:30:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:36.434 /dev/nbd1 00:12:36.434 13:30:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:36.434 13:30:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:36.434 13:30:49 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:36.434 13:30:49 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:36.434 13:30:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:36.434 13:30:49 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:36.434 13:30:49 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:36.434 13:30:49 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:36.434 13:30:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:36.434 13:30:49 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:36.434 13:30:49 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:36.434 1+0 records in 00:12:36.434 1+0 records out 00:12:36.434 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045714 s, 9.0 MB/s 00:12:36.434 13:30:49 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:36.434 13:30:49 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:36.434 13:30:49 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:36.434 13:30:49 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 
00:12:36.434 13:30:49 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:36.434 13:30:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.434 13:30:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:36.434 13:30:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:36.434 13:30:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.434 13:30:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:36.692 { 00:12:36.692 "bdev_name": "Malloc0", 00:12:36.692 "nbd_device": "/dev/nbd0" 00:12:36.692 }, 00:12:36.692 { 00:12:36.692 "bdev_name": "Malloc1", 00:12:36.692 "nbd_device": "/dev/nbd1" 00:12:36.692 } 00:12:36.692 ]' 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:36.692 { 00:12:36.692 "bdev_name": "Malloc0", 00:12:36.692 "nbd_device": "/dev/nbd0" 00:12:36.692 }, 00:12:36.692 { 00:12:36.692 "bdev_name": "Malloc1", 00:12:36.692 "nbd_device": "/dev/nbd1" 00:12:36.692 } 00:12:36.692 ]' 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:36.692 /dev/nbd1' 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:36.692 /dev/nbd1' 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:36.692 256+0 records in 00:12:36.692 256+0 records out 00:12:36.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00754447 s, 139 MB/s 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:36.692 256+0 records in 00:12:36.692 256+0 records out 00:12:36.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245239 s, 42.8 MB/s 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 
00:12:36.692 256+0 records in 00:12:36.692 256+0 records out 00:12:36.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269718 s, 38.9 MB/s 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:36.692 13:30:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:36.693 13:30:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:36.693 13:30:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:36.693 13:30:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:36.693 13:30:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:36.693 13:30:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:36.693 13:30:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:36.693 13:30:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:36.693 13:30:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:36.693 13:30:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:36.693 13:30:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:36.693 13:30:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:36.693 13:30:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.693 13:30:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.258 13:30:50 event.app_repeat -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:37.258 13:30:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:37.516 13:30:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:37.516 13:30:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:37.516 13:30:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:37.774 13:30:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:37.774 13:30:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:37.774 13:30:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:37.774 13:30:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:37.774 13:30:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:37.774 13:30:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:37.774 13:30:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:37.774 13:30:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:37.774 13:30:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:37.774 13:30:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:38.033 13:30:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:38.033 [2024-05-15 13:30:51.091835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:38.292 [2024-05-15 13:30:51.183717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.292 [2024-05-15 13:30:51.183730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.292 [2024-05-15 13:30:51.237939] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:38.292 [2024-05-15 13:30:51.238021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:40.887 13:30:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:12:40.887 spdk_app_start Round 2 00:12:40.887 13:30:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:12:40.887 13:30:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75531 /var/tmp/spdk-nbd.sock 00:12:40.887 13:30:53 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75531 ']' 00:12:40.887 13:30:53 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:40.887 13:30:53 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:40.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:40.887 13:30:53 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
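The Round 0/1/2 banners come from the outer loop in event.sh: app_repeat is started once with -t 4, and each iteration waits for its RPC socket, runs the bdev/nbd verification, then asks the instance to shut itself down so the next repeat can come up. A simplified outline of that loop, using the helper names exactly as they appear in the trace, is:

    for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock     # block until the RPC socket is back
      # ... create Malloc0/Malloc1 and run the nbd verification sketched earlier ...
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3                                                # let app_repeat restart the app
    done
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock       # the fourth and final instance
    killprocess "$repeat_pid"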
00:12:40.887 13:30:53 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:40.887 13:30:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:41.145 13:30:54 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:41.145 13:30:54 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:41.145 13:30:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:41.404 Malloc0 00:12:41.404 13:30:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:12:41.970 Malloc1 00:12:41.970 13:30:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:41.970 13:30:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:42.229 /dev/nbd0 00:12:42.229 13:30:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:42.229 13:30:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:42.229 13:30:55 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:42.229 13:30:55 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:42.229 13:30:55 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:42.229 13:30:55 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:42.229 13:30:55 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:42.229 13:30:55 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:42.229 13:30:55 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:42.229 13:30:55 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:42.229 13:30:55 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:42.229 1+0 records in 00:12:42.229 1+0 records out 
00:12:42.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466715 s, 8.8 MB/s 00:12:42.229 13:30:55 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:42.229 13:30:55 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:42.229 13:30:55 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:42.229 13:30:55 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:42.229 13:30:55 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:42.229 13:30:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.229 13:30:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:42.229 13:30:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:12:42.487 /dev/nbd1 00:12:42.487 13:30:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:42.487 13:30:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:42.487 13:30:55 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:42.487 13:30:55 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:12:42.487 13:30:55 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:42.487 13:30:55 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:42.487 13:30:55 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:42.487 13:30:55 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:12:42.487 13:30:55 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:42.487 13:30:55 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:42.487 13:30:55 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:12:42.487 1+0 records in 00:12:42.487 1+0 records out 00:12:42.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365831 s, 11.2 MB/s 00:12:42.487 13:30:55 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:42.487 13:30:55 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:12:42.487 13:30:55 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:12:42.487 13:30:55 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:42.487 13:30:55 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:12:42.487 13:30:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.487 13:30:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:42.487 13:30:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:42.487 13:30:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.487 13:30:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:42.745 { 00:12:42.745 "bdev_name": "Malloc0", 00:12:42.745 "nbd_device": "/dev/nbd0" 00:12:42.745 }, 00:12:42.745 { 00:12:42.745 "bdev_name": "Malloc1", 00:12:42.745 "nbd_device": "/dev/nbd1" 00:12:42.745 } 
00:12:42.745 ]' 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:42.745 { 00:12:42.745 "bdev_name": "Malloc0", 00:12:42.745 "nbd_device": "/dev/nbd0" 00:12:42.745 }, 00:12:42.745 { 00:12:42.745 "bdev_name": "Malloc1", 00:12:42.745 "nbd_device": "/dev/nbd1" 00:12:42.745 } 00:12:42.745 ]' 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:42.745 /dev/nbd1' 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:42.745 /dev/nbd1' 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:12:42.745 256+0 records in 00:12:42.745 256+0 records out 00:12:42.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00738933 s, 142 MB/s 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:42.745 256+0 records in 00:12:42.745 256+0 records out 00:12:42.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244093 s, 43.0 MB/s 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:42.745 256+0 records in 00:12:42.745 256+0 records out 00:12:42.745 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265117 s, 39.6 MB/s 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:42.745 13:30:55 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:12:42.745 13:30:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:42.746 13:30:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:12:42.746 13:30:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:12:42.746 13:30:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:12:42.746 13:30:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.746 13:30:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:42.746 13:30:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:42.746 13:30:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:12:42.746 13:30:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.746 13:30:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:43.003 13:30:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:43.003 13:30:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:43.003 13:30:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:43.003 13:30:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.003 13:30:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.003 13:30:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:43.003 13:30:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:43.003 13:30:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.004 13:30:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.004 13:30:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:43.570 13:30:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:43.570 13:30:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:43.570 13:30:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:43.570 13:30:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:43.570 13:30:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:43.570 13:30:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:43.570 13:30:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:12:43.570 13:30:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:12:43.570 13:30:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:43.570 13:30:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:43.570 13:30:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:43.570 13:30:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:43.570 13:30:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:43.570 13:30:56 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:12:43.829 13:30:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:43.829 13:30:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:12:43.829 13:30:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:43.829 13:30:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:12:43.829 13:30:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:12:43.829 13:30:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:12:43.829 13:30:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:12:43.829 13:30:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:43.829 13:30:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:12:43.829 13:30:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:12:44.087 13:30:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:12:44.347 [2024-05-15 13:30:57.244700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:44.347 [2024-05-15 13:30:57.337194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.347 [2024-05-15 13:30:57.337208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.347 [2024-05-15 13:30:57.390672] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:12:44.347 [2024-05-15 13:30:57.390734] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:12:47.655 13:31:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 75531 /var/tmp/spdk-nbd.sock 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75531 ']' 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:47.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
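The nbd_get_disks / jq / grep -c passages repeated in each round are the bookkeeping half of the test: nbd_get_disks returns a JSON array of {bdev_name, nbd_device} objects, and the helper counts /dev/nbd entries to assert there are exactly two while the disks are exported and zero after nbd_stop_disk. A compact sketch of that check (socket path as above) could be:

    count_nbd() {
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' \
        | grep -c /dev/nbd || true     # grep -c still prints 0 but exits non-zero on no match
    }

    [ "$(count_nbd)" -eq 2 ] || exit 1   # both disks must be attached
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
    [ "$(count_nbd)" -eq 0 ] || exit 1   # nothing left behind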
00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:12:47.655 13:31:00 event.app_repeat -- event/event.sh@39 -- # killprocess 75531 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 75531 ']' 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 75531 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75531 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75531' 00:12:47.655 killing process with pid 75531 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@965 -- # kill 75531 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@970 -- # wait 75531 00:12:47.655 spdk_app_start is called in Round 0. 00:12:47.655 Shutdown signal received, stop current app iteration 00:12:47.655 Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 reinitialization... 00:12:47.655 spdk_app_start is called in Round 1. 00:12:47.655 Shutdown signal received, stop current app iteration 00:12:47.655 Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 reinitialization... 00:12:47.655 spdk_app_start is called in Round 2. 00:12:47.655 Shutdown signal received, stop current app iteration 00:12:47.655 Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 reinitialization... 00:12:47.655 spdk_app_start is called in Round 3. 00:12:47.655 Shutdown signal received, stop current app iteration 00:12:47.655 13:31:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:12:47.655 13:31:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:12:47.655 00:12:47.655 real 0m19.500s 00:12:47.655 user 0m44.024s 00:12:47.655 sys 0m3.108s 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:47.655 13:31:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:12:47.655 ************************************ 00:12:47.655 END TEST app_repeat 00:12:47.655 ************************************ 00:12:47.655 13:31:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:12:47.655 13:31:00 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:47.655 13:31:00 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:47.655 13:31:00 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:47.655 13:31:00 event -- common/autotest_common.sh@10 -- # set +x 00:12:47.655 ************************************ 00:12:47.655 START TEST cpu_locks 00:12:47.655 ************************************ 00:12:47.655 13:31:00 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:12:47.655 * Looking for test storage... 
00:12:47.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:12:47.655 13:31:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:12:47.655 13:31:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:12:47.655 13:31:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:12:47.655 13:31:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:12:47.655 13:31:00 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:47.655 13:31:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:47.655 13:31:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:47.656 ************************************ 00:12:47.656 START TEST default_locks 00:12:47.656 ************************************ 00:12:47.656 13:31:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:12:47.656 13:31:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=76164 00:12:47.656 13:31:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 76164 00:12:47.656 13:31:00 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 76164 ']' 00:12:47.656 13:31:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:47.656 13:31:00 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.656 13:31:00 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:47.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.656 13:31:00 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.656 13:31:00 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:47.656 13:31:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:47.656 [2024-05-15 13:31:00.751962] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:47.656 [2024-05-15 13:31:00.752054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76164 ] 00:12:47.914 [2024-05-15 13:31:00.869419] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:47.914 [2024-05-15 13:31:00.887495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.914 [2024-05-15 13:31:00.985507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.848 13:31:01 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:48.848 13:31:01 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:12:48.848 13:31:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 76164 00:12:48.848 13:31:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 76164 00:12:48.848 13:31:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:49.107 13:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 76164 00:12:49.107 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 76164 ']' 00:12:49.107 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 76164 00:12:49.107 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:12:49.107 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:49.107 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76164 00:12:49.107 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:49.107 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:49.107 killing process with pid 76164 00:12:49.107 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76164' 00:12:49.107 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 76164 00:12:49.107 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 76164 00:12:49.672 13:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 76164 00:12:49.672 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76164 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 76164 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 76164 ']' 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:49.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
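The default_locks trace above is the core of the whole cpu_locks suite: spdk_tgt is started with -m 0x1, and locks_exist then proves that reactor core 0 is protected by a file lock by filtering lslocks output for the target pid for spdk_cpu_lock entries. Below is a hedged equivalent of that check, built only from the commands visible at cpu_locks.sh@22 in the trace; the real helper lives in test/event/cpu_locks.sh and is not copied here.

    locks_exist_sketch() {
        local pid=$1
        # spdk_tgt takes one /var/tmp/spdk_cpu_lock_* file lock per claimed core;
        # lslocks lists the locks currently held by that pid.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

killprocess then double-checks with ps --no-headers -o comm= that the pid still belongs to an SPDK reactor before sending the kill and waiting on it, which is why process_name=reactor_0 shows up in the trace.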
00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:49.673 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76164) - No such process 00:12:49.673 ERROR: process (pid: 76164) is no longer running 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:49.673 00:12:49.673 real 0m1.867s 00:12:49.673 user 0m1.984s 00:12:49.673 sys 0m0.584s 00:12:49.673 ************************************ 00:12:49.673 END TEST default_locks 00:12:49.673 ************************************ 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:49.673 13:31:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:12:49.673 13:31:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:12:49.673 13:31:02 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:49.673 13:31:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:49.673 13:31:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:49.673 ************************************ 00:12:49.673 START TEST default_locks_via_rpc 00:12:49.673 ************************************ 00:12:49.673 13:31:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:12:49.673 13:31:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=76228 00:12:49.673 13:31:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 76228 00:12:49.673 13:31:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:49.673 13:31:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76228 ']' 00:12:49.673 13:31:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.673 13:31:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:49.673 13:31:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
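The "No such process" line and the es=1 bookkeeping just above come from the NOT wrapper: after killprocess, waitforlisten on the dead pid 76164 is expected to fail, and the test only passes because it does. The snippet below is a simplified reconstruction of that expect-failure pattern; the real NOT() in test/common/autotest_common.sh also inspects EXIT_STATUS and maps signal exits, which this sketch glosses over.

    NOT_sketch() {
        local es=0
        "$@" || es=$?
        # Success, or an exit status above 128 (death by signal), is not the
        # failure we were waiting for; any ordinary error exit is.
        if ((es == 0)) || ((es > 128)); then
            return 1
        fi
        return 0
    }
    # e.g. NOT_sketch waitforlisten 76164   # passes only because pid 76164 is gone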
00:12:49.673 13:31:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:49.673 13:31:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.673 [2024-05-15 13:31:02.682307] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:49.673 [2024-05-15 13:31:02.682433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76228 ] 00:12:49.932 [2024-05-15 13:31:02.804311] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:49.932 [2024-05-15 13:31:02.821425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.932 [2024-05-15 13:31:02.917512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 76228 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:50.866 13:31:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 76228 00:12:51.127 13:31:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 76228 00:12:51.127 13:31:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 76228 ']' 00:12:51.127 13:31:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 76228 00:12:51.127 13:31:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:12:51.127 13:31:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:51.127 13:31:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- 
# ps --no-headers -o comm= 76228 00:12:51.127 killing process with pid 76228 00:12:51.127 13:31:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:51.127 13:31:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:51.127 13:31:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76228' 00:12:51.127 13:31:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 76228 00:12:51.127 13:31:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 76228 00:12:51.385 00:12:51.385 real 0m1.791s 00:12:51.385 user 0m1.925s 00:12:51.385 sys 0m0.512s 00:12:51.385 13:31:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:51.385 ************************************ 00:12:51.385 END TEST default_locks_via_rpc 00:12:51.385 ************************************ 00:12:51.385 13:31:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.385 13:31:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:12:51.385 13:31:04 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:51.385 13:31:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:51.385 13:31:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:51.385 ************************************ 00:12:51.385 START TEST non_locking_app_on_locked_coremask 00:12:51.385 ************************************ 00:12:51.385 13:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:12:51.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.385 13:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=76297 00:12:51.385 13:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:51.385 13:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 76297 /var/tmp/spdk.sock 00:12:51.385 13:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76297 ']' 00:12:51.385 13:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.385 13:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:51.385 13:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.385 13:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:51.385 13:31:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:51.643 [2024-05-15 13:31:04.523745] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:12:51.643 [2024-05-15 13:31:04.524106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76297 ] 00:12:51.643 [2024-05-15 13:31:04.646945] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:51.643 [2024-05-15 13:31:04.664919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.902 [2024-05-15 13:31:04.760433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.468 13:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:52.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:52.468 13:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:52.468 13:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:12:52.468 13:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=76325 00:12:52.468 13:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 76325 /var/tmp/spdk2.sock 00:12:52.468 13:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76325 ']' 00:12:52.468 13:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:52.468 13:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:52.468 13:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:52.468 13:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:52.468 13:31:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:52.726 [2024-05-15 13:31:05.573051] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:52.726 [2024-05-15 13:31:05.573363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76325 ] 00:12:52.726 [2024-05-15 13:31:05.692378] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:52.726 [2024-05-15 13:31:05.715178] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:52.726 [2024-05-15 13:31:05.715233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.984 [2024-05-15 13:31:05.920289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.552 13:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:53.552 13:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:53.552 13:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 76297 00:12:53.552 13:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76297 00:12:53.552 13:31:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:54.487 13:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 76297 00:12:54.487 13:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76297 ']' 00:12:54.487 13:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 76297 00:12:54.487 13:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:54.487 13:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:54.487 13:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76297 00:12:54.487 killing process with pid 76297 00:12:54.487 13:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:54.487 13:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:54.487 13:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76297' 00:12:54.487 13:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 76297 00:12:54.487 13:31:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 76297 00:12:55.054 13:31:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 76325 00:12:55.054 13:31:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76325 ']' 00:12:55.054 13:31:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 76325 00:12:55.054 13:31:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:55.054 13:31:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:55.054 13:31:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76325 00:12:55.054 13:31:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:55.054 killing process with pid 76325 00:12:55.054 13:31:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:55.054 13:31:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76325' 00:12:55.054 13:31:08 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 76325 00:12:55.054 13:31:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 76325 00:12:55.629 ************************************ 00:12:55.629 END TEST non_locking_app_on_locked_coremask 00:12:55.629 ************************************ 00:12:55.629 00:12:55.629 real 0m4.033s 00:12:55.629 user 0m4.523s 00:12:55.629 sys 0m1.110s 00:12:55.629 13:31:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:55.629 13:31:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:55.629 13:31:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:12:55.629 13:31:08 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:55.629 13:31:08 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:55.629 13:31:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:55.629 ************************************ 00:12:55.629 START TEST locking_app_on_unlocked_coremask 00:12:55.629 ************************************ 00:12:55.629 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:12:55.629 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=76404 00:12:55.629 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:12:55.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.629 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 76404 /var/tmp/spdk.sock 00:12:55.629 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76404 ']' 00:12:55.629 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.630 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:55.630 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.630 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:55.630 13:31:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:55.630 [2024-05-15 13:31:08.598100] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:55.630 [2024-05-15 13:31:08.598741] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76404 ] 00:12:55.630 [2024-05-15 13:31:08.720501] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:55.888 [2024-05-15 13:31:08.737532] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:12:55.888 [2024-05-15 13:31:08.737569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.888 [2024-05-15 13:31:08.833663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:12:56.821 13:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:56.821 13:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:56.821 13:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:12:56.821 13:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=76432 00:12:56.821 13:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 76432 /var/tmp/spdk2.sock 00:12:56.822 13:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76432 ']' 00:12:56.822 13:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:12:56.822 13:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:56.822 13:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:12:56.822 13:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:56.822 13:31:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:56.822 [2024-05-15 13:31:09.650721] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:56.822 [2024-05-15 13:31:09.651598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76432 ] 00:12:56.822 [2024-05-15 13:31:09.771385] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:56.822 [2024-05-15 13:31:09.795094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.080 [2024-05-15 13:31:09.979969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.645 13:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:57.645 13:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:12:57.645 13:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 76432 00:12:57.645 13:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76432 00:12:57.645 13:31:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:12:58.580 13:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 76404 00:12:58.580 13:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76404 ']' 00:12:58.580 13:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 76404 00:12:58.580 13:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:58.580 13:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:58.580 13:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76404 00:12:58.580 killing process with pid 76404 00:12:58.580 13:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:58.580 13:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:58.580 13:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76404' 00:12:58.580 13:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 76404 00:12:58.580 13:31:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 76404 00:12:59.515 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 76432 00:12:59.515 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76432 ']' 00:12:59.515 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 76432 00:12:59.515 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:12:59.515 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:59.515 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76432 00:12:59.515 killing process with pid 76432 00:12:59.515 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:59.515 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:59.515 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76432' 00:12:59.515 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@965 -- # kill 76432 00:12:59.515 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 76432 00:12:59.774 00:12:59.774 real 0m4.126s 00:12:59.774 user 0m4.559s 00:12:59.774 sys 0m1.187s 00:12:59.774 ************************************ 00:12:59.774 END TEST locking_app_on_unlocked_coremask 00:12:59.774 ************************************ 00:12:59.774 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:59.774 13:31:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:59.774 13:31:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:12:59.774 13:31:12 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:59.774 13:31:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:59.774 13:31:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:12:59.774 ************************************ 00:12:59.774 START TEST locking_app_on_locked_coremask 00:12:59.774 ************************************ 00:12:59.774 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:12:59.774 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=76511 00:12:59.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.774 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 76511 /var/tmp/spdk.sock 00:12:59.774 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:12:59.774 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76511 ']' 00:12:59.774 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.774 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:59.774 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.774 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:59.774 13:31:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:12:59.774 [2024-05-15 13:31:12.767307] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:12:59.774 [2024-05-15 13:31:12.767403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76511 ] 00:13:00.128 [2024-05-15 13:31:12.884826] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:00.128 [2024-05-15 13:31:12.902094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.128 [2024-05-15 13:31:12.997492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=76539 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 76539 /var/tmp/spdk2.sock 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76539 /var/tmp/spdk2.sock 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76539 /var/tmp/spdk2.sock 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76539 ']' 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:00.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:00.694 13:31:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:00.952 [2024-05-15 13:31:13.840277] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:00.952 [2024-05-15 13:31:13.840687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76539 ] 00:13:00.952 [2024-05-15 13:31:13.968369] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:00.952 [2024-05-15 13:31:13.992022] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 76511 has claimed it. 
00:13:00.952 [2024-05-15 13:31:13.992106] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:01.518 ERROR: process (pid: 76539) is no longer running 00:13:01.518 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76539) - No such process 00:13:01.518 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:01.518 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:13:01.518 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:13:01.518 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:01.518 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:01.518 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:01.518 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 76511 00:13:01.518 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76511 00:13:01.518 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:13:02.084 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 76511 00:13:02.084 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76511 ']' 00:13:02.084 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 76511 00:13:02.084 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:13:02.084 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:02.084 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76511 00:13:02.084 killing process with pid 76511 00:13:02.084 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:02.084 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:02.084 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76511' 00:13:02.084 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 76511 00:13:02.084 13:31:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 76511 00:13:02.343 00:13:02.343 real 0m2.654s 00:13:02.343 user 0m3.098s 00:13:02.343 sys 0m0.675s 00:13:02.343 13:31:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:02.343 ************************************ 00:13:02.343 END TEST locking_app_on_locked_coremask 00:13:02.343 ************************************ 00:13:02.343 13:31:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:02.343 13:31:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:13:02.343 13:31:15 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:02.343 13:31:15 event.cpu_locks 
-- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:02.343 13:31:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:02.343 ************************************ 00:13:02.343 START TEST locking_overlapped_coremask 00:13:02.343 ************************************ 00:13:02.343 13:31:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:13:02.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.343 13:31:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=76585 00:13:02.343 13:31:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:13:02.343 13:31:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 76585 /var/tmp/spdk.sock 00:13:02.343 13:31:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 76585 ']' 00:13:02.343 13:31:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.343 13:31:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:02.343 13:31:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.343 13:31:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:02.343 13:31:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:02.602 [2024-05-15 13:31:15.475160] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:02.602 [2024-05-15 13:31:15.475254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76585 ] 00:13:02.602 [2024-05-15 13:31:15.592902] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:02.602 [2024-05-15 13:31:15.608928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:02.861 [2024-05-15 13:31:15.705533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.861 [2024-05-15 13:31:15.705444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.861 [2024-05-15 13:31:15.705528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=76615 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 76615 /var/tmp/spdk2.sock 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76615 /var/tmp/spdk2.sock 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76615 /var/tmp/spdk2.sock 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 76615 ']' 00:13:03.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:03.428 13:31:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:03.686 [2024-05-15 13:31:16.538230] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:03.686 [2024-05-15 13:31:16.538337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76615 ] 00:13:03.686 [2024-05-15 13:31:16.662776] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:03.686 [2024-05-15 13:31:16.685323] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76585 has claimed it. 00:13:03.686 [2024-05-15 13:31:16.685383] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:13:04.252 ERROR: process (pid: 76615) is no longer running 00:13:04.252 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76615) - No such process 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 76585 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 76585 ']' 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 76585 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76585 00:13:04.252 killing process with pid 76585 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76585' 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 76585 00:13:04.252 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 76585 00:13:04.820 00:13:04.820 real 0m2.243s 00:13:04.820 user 0m6.359s 00:13:04.820 sys 0m0.413s 00:13:04.820 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:04.820 ************************************ 
00:13:04.820 END TEST locking_overlapped_coremask 00:13:04.820 ************************************ 00:13:04.820 13:31:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:13:04.820 13:31:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:13:04.820 13:31:17 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:04.820 13:31:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:04.820 13:31:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:04.820 ************************************ 00:13:04.820 START TEST locking_overlapped_coremask_via_rpc 00:13:04.820 ************************************ 00:13:04.820 13:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:13:04.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.820 13:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=76667 00:13:04.820 13:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 76667 /var/tmp/spdk.sock 00:13:04.820 13:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76667 ']' 00:13:04.820 13:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:13:04.820 13:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.820 13:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:04.820 13:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.820 13:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:04.820 13:31:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.820 [2024-05-15 13:31:17.765476] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:04.820 [2024-05-15 13:31:17.765631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76667 ] 00:13:04.820 [2024-05-15 13:31:17.884189] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:04.820 [2024-05-15 13:31:17.905315] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:04.820 [2024-05-15 13:31:17.905543] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:05.079 [2024-05-15 13:31:18.009724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.079 [2024-05-15 13:31:18.009869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.079 [2024-05-15 13:31:18.009875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.725 13:31:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:05.725 13:31:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:05.725 13:31:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=76697 00:13:05.725 13:31:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:13:05.725 13:31:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 76697 /var/tmp/spdk2.sock 00:13:05.725 13:31:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76697 ']' 00:13:05.725 13:31:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:05.725 13:31:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:05.725 13:31:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:05.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:13:05.725 13:31:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:05.725 13:31:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.006 [2024-05-15 13:31:18.800964] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:06.006 [2024-05-15 13:31:18.801059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76697 ] 00:13:06.006 [2024-05-15 13:31:18.922659] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:06.006 [2024-05-15 13:31:18.946478] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:13:06.006 [2024-05-15 13:31:18.946524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:06.265 [2024-05-15 13:31:19.138884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.265 [2024-05-15 13:31:19.138995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:06.265 [2024-05-15 13:31:19.138996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.830 [2024-05-15 13:31:19.804856] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76667 has claimed it. 00:13:06.830 2024/05/15 13:31:19 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:13:06.830 request: 00:13:06.830 { 00:13:06.830 "method": "framework_enable_cpumask_locks", 00:13:06.830 "params": {} 00:13:06.830 } 00:13:06.830 Got JSON-RPC error response 00:13:06.830 GoRPCClient: error on JSON-RPC call 00:13:06.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
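The failure above is expected: the first target was started with -m 0x7 and the second with -m 0x1c, so both masks cover core 2 and the second process cannot claim a lock the first already holds. A quick bash sketch of the overlap (illustration only, not part of the test output):

    # -m 0x7  -> binary 00111 -> cores 0,1,2  (pid 76667, /var/tmp/spdk.sock)
    # -m 0x1c -> binary 11100 -> cores 2,3,4  (pid 76697, /var/tmp/spdk2.sock)
    printf 'overlapping core mask: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. core 2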
00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 76667 /var/tmp/spdk.sock 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76667 ']' 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:06.830 13:31:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.087 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:07.087 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:07.087 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 76697 /var/tmp/spdk2.sock 00:13:07.087 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76697 ']' 00:13:07.087 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:13:07.087 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:07.087 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:13:07.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
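The check_remaining_locks helper that runs next boils down to a glob-versus-brace-expansion comparison. Reassembled from the cpu_locks.sh lines echoed in the trace (a sketch, assuming those traced lines are the whole function):

    check_remaining_locks() {
        # one lock file per claimed core; with locks enabled via RPC only on the
        # first target (mask 0x7), exactly cores 000-002 should hold lock files
        locks=(/var/tmp/spdk_cpu_lock_*)
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }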
00:13:07.087 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:07.087 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.344 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:07.344 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:07.344 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:13:07.344 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:13:07.344 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:13:07.344 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:13:07.344 00:13:07.344 real 0m2.590s 00:13:07.344 user 0m1.296s 00:13:07.344 sys 0m0.227s 00:13:07.344 ************************************ 00:13:07.344 END TEST locking_overlapped_coremask_via_rpc 00:13:07.344 ************************************ 00:13:07.344 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:07.344 13:31:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.344 13:31:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:13:07.344 13:31:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76667 ]] 00:13:07.344 13:31:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76667 00:13:07.344 13:31:20 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76667 ']' 00:13:07.344 13:31:20 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76667 00:13:07.344 13:31:20 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:13:07.344 13:31:20 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:07.345 13:31:20 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76667 00:13:07.345 killing process with pid 76667 00:13:07.345 13:31:20 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:07.345 13:31:20 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:07.345 13:31:20 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76667' 00:13:07.345 13:31:20 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 76667 00:13:07.345 13:31:20 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 76667 00:13:07.910 13:31:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76697 ]] 00:13:07.910 13:31:20 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76697 00:13:07.910 13:31:20 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76697 ']' 00:13:07.910 13:31:20 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76697 00:13:07.910 13:31:20 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:13:07.910 13:31:20 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:07.910 
13:31:20 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76697 00:13:07.910 killing process with pid 76697 00:13:07.910 13:31:20 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:13:07.910 13:31:20 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:13:07.910 13:31:20 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76697' 00:13:07.910 13:31:20 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 76697 00:13:07.910 13:31:20 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 76697 00:13:08.168 13:31:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:08.168 13:31:21 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:13:08.168 13:31:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76667 ]] 00:13:08.168 13:31:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76667 00:13:08.168 13:31:21 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76667 ']' 00:13:08.168 13:31:21 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76667 00:13:08.168 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (76667) - No such process 00:13:08.168 Process with pid 76667 is not found 00:13:08.168 Process with pid 76697 is not found 00:13:08.168 13:31:21 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 76667 is not found' 00:13:08.168 13:31:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76697 ]] 00:13:08.169 13:31:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76697 00:13:08.169 13:31:21 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76697 ']' 00:13:08.169 13:31:21 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76697 00:13:08.169 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (76697) - No such process 00:13:08.169 13:31:21 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 76697 is not found' 00:13:08.169 13:31:21 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:13:08.169 00:13:08.169 real 0m20.534s 00:13:08.169 user 0m35.846s 00:13:08.169 sys 0m5.517s 00:13:08.169 13:31:21 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:08.169 13:31:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:13:08.169 ************************************ 00:13:08.169 END TEST cpu_locks 00:13:08.169 ************************************ 00:13:08.169 00:13:08.169 real 0m48.987s 00:13:08.169 user 1m35.265s 00:13:08.169 sys 0m9.411s 00:13:08.169 13:31:21 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:08.169 13:31:21 event -- common/autotest_common.sh@10 -- # set +x 00:13:08.169 ************************************ 00:13:08.169 END TEST event 00:13:08.169 ************************************ 00:13:08.169 13:31:21 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:08.169 13:31:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:08.169 13:31:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:08.169 13:31:21 -- common/autotest_common.sh@10 -- # set +x 00:13:08.169 ************************************ 00:13:08.169 START TEST thread 00:13:08.169 ************************************ 00:13:08.169 13:31:21 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:13:08.427 * Looking for test storage... 
00:13:08.427 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:13:08.427 13:31:21 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:08.427 13:31:21 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:13:08.427 13:31:21 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:08.427 13:31:21 thread -- common/autotest_common.sh@10 -- # set +x 00:13:08.427 ************************************ 00:13:08.427 START TEST thread_poller_perf 00:13:08.427 ************************************ 00:13:08.427 13:31:21 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:13:08.427 [2024-05-15 13:31:21.332271] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:08.427 [2024-05-15 13:31:21.332581] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76843 ] 00:13:08.427 [2024-05-15 13:31:21.449252] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:08.427 [2024-05-15 13:31:21.470348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.685 [2024-05-15 13:31:21.572934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.685 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:13:09.619 ====================================== 00:13:09.619 busy:2208784275 (cyc) 00:13:09.619 total_run_count: 300000 00:13:09.619 tsc_hz: 2200000000 (cyc) 00:13:09.619 ====================================== 00:13:09.619 poller_cost: 7362 (cyc), 3346 (nsec) 00:13:09.619 00:13:09.619 real 0m1.337s 00:13:09.619 user 0m1.171s 00:13:09.619 sys 0m0.057s 00:13:09.619 13:31:22 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:09.619 13:31:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:09.619 ************************************ 00:13:09.619 END TEST thread_poller_perf 00:13:09.619 ************************************ 00:13:09.619 13:31:22 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:09.619 13:31:22 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:13:09.619 13:31:22 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:09.619 13:31:22 thread -- common/autotest_common.sh@10 -- # set +x 00:13:09.619 ************************************ 00:13:09.619 START TEST thread_poller_perf 00:13:09.619 ************************************ 00:13:09.620 13:31:22 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:13:09.879 [2024-05-15 13:31:22.719898] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
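The summary block above is easy to sanity-check: poller_cost is the busy cycle count divided by the run count, converted to nanoseconds with the reported TSC frequency. A back-of-the-envelope check in bash (illustration, not tool output):

    echo $((2208784275 / 300000))               # -> 7362 cycles per poller iteration
    echo $((7362 * 1000000000 / 2200000000))    # -> 3346 nsec at tsc_hz = 2200000000

The 0-microsecond-period run that follows works out the same way: 2202265500 / 4203000 gives roughly 523 cycles, about 237 nsec.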
00:13:09.879 [2024-05-15 13:31:22.720003] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76873 ] 00:13:09.879 [2024-05-15 13:31:22.840000] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:09.879 [2024-05-15 13:31:22.859207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.879 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:13:09.879 [2024-05-15 13:31:22.953783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.250 ====================================== 00:13:11.250 busy:2202265500 (cyc) 00:13:11.250 total_run_count: 4203000 00:13:11.250 tsc_hz: 2200000000 (cyc) 00:13:11.250 ====================================== 00:13:11.250 poller_cost: 523 (cyc), 237 (nsec) 00:13:11.250 00:13:11.250 real 0m1.327s 00:13:11.250 user 0m1.167s 00:13:11.250 sys 0m0.052s 00:13:11.250 13:31:24 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:11.250 ************************************ 00:13:11.250 END TEST thread_poller_perf 00:13:11.250 ************************************ 00:13:11.250 13:31:24 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:13:11.250 13:31:24 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:13:11.250 ************************************ 00:13:11.250 END TEST thread 00:13:11.250 ************************************ 00:13:11.250 00:13:11.250 real 0m2.845s 00:13:11.250 user 0m2.412s 00:13:11.250 sys 0m0.208s 00:13:11.250 13:31:24 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:11.250 13:31:24 thread -- common/autotest_common.sh@10 -- # set +x 00:13:11.250 13:31:24 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:13:11.250 13:31:24 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:11.250 13:31:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:11.250 13:31:24 -- common/autotest_common.sh@10 -- # set +x 00:13:11.250 ************************************ 00:13:11.250 START TEST accel 00:13:11.250 ************************************ 00:13:11.250 13:31:24 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:13:11.250 * Looking for test storage... 
00:13:11.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:11.250 13:31:24 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:13:11.250 13:31:24 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:13:11.250 13:31:24 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:11.250 13:31:24 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=76949 00:13:11.250 13:31:24 accel -- accel/accel.sh@63 -- # waitforlisten 76949 00:13:11.250 13:31:24 accel -- common/autotest_common.sh@827 -- # '[' -z 76949 ']' 00:13:11.250 13:31:24 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:13:11.250 13:31:24 accel -- accel/accel.sh@61 -- # build_accel_config 00:13:11.250 13:31:24 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.250 13:31:24 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:11.250 13:31:24 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:11.250 13:31:24 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.250 13:31:24 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:11.250 13:31:24 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:11.250 13:31:24 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:11.250 13:31:24 accel -- common/autotest_common.sh@10 -- # set +x 00:13:11.250 13:31:24 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:11.250 13:31:24 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:11.250 13:31:24 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:11.250 13:31:24 accel -- accel/accel.sh@41 -- # jq -r . 00:13:11.250 [2024-05-15 13:31:24.265526] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:11.251 [2024-05-15 13:31:24.265685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76949 ] 00:13:11.508 [2024-05-15 13:31:24.395424] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:11.508 [2024-05-15 13:31:24.407117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.508 [2024-05-15 13:31:24.501355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.440 13:31:25 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:12.440 13:31:25 accel -- common/autotest_common.sh@860 -- # return 0 00:13:12.441 13:31:25 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:13:12.441 13:31:25 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:13:12.441 13:31:25 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:13:12.441 13:31:25 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:13:12.441 13:31:25 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:13:12.441 13:31:25 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:13:12.441 13:31:25 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:13:12.441 13:31:25 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.441 13:31:25 accel -- common/autotest_common.sh@10 -- # set +x 00:13:12.441 13:31:25 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.441 13:31:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # IFS== 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:12.441 13:31:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:12.441 13:31:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # IFS== 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:12.441 13:31:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:12.441 13:31:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # IFS== 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:12.441 13:31:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:12.441 13:31:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # IFS== 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:12.441 13:31:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:12.441 13:31:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # IFS== 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:12.441 13:31:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:12.441 13:31:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # IFS== 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:12.441 13:31:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:12.441 13:31:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # IFS== 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:12.441 13:31:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:12.441 13:31:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # IFS== 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:12.441 13:31:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:12.441 13:31:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # IFS== 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:12.441 13:31:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:12.441 13:31:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # IFS== 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:12.441 13:31:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:12.441 13:31:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # IFS== 
00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:12.441 13:31:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:12.441 13:31:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # IFS== 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:12.441 13:31:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:12.441 13:31:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # IFS== 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:12.441 13:31:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:12.441 13:31:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # IFS== 00:13:12.441 13:31:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:13:12.441 13:31:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:13:12.441 13:31:25 accel -- accel/accel.sh@75 -- # killprocess 76949 00:13:12.441 13:31:25 accel -- common/autotest_common.sh@946 -- # '[' -z 76949 ']' 00:13:12.441 13:31:25 accel -- common/autotest_common.sh@950 -- # kill -0 76949 00:13:12.441 13:31:25 accel -- common/autotest_common.sh@951 -- # uname 00:13:12.441 13:31:25 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:12.441 13:31:25 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76949 00:13:12.441 killing process with pid 76949 00:13:12.441 13:31:25 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:12.441 13:31:25 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:12.441 13:31:25 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76949' 00:13:12.441 13:31:25 accel -- common/autotest_common.sh@965 -- # kill 76949 00:13:12.441 13:31:25 accel -- common/autotest_common.sh@970 -- # wait 76949 00:13:12.698 13:31:25 accel -- accel/accel.sh@76 -- # trap - ERR 00:13:12.698 13:31:25 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:13:12.698 13:31:25 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:12.698 13:31:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:12.698 13:31:25 accel -- common/autotest_common.sh@10 -- # set +x 00:13:12.698 13:31:25 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:13:12.698 13:31:25 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:13:12.698 13:31:25 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:13:12.698 13:31:25 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:12.699 13:31:25 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:12.699 13:31:25 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:12.699 13:31:25 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:12.699 13:31:25 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:12.699 13:31:25 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:13:12.699 13:31:25 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
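The accel_get_opc_assignments parsing traced above reduces to a small loop over "key=value" pairs. Reassembled here as a sketch (the here-string feeding read is assumed; the rest mirrors the echoed accel.sh lines):

    declare -A expected_opcs
    exp_opcs=($(rpc_cmd accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"   # e.g. "crc32c=software"
        expected_opcs["$opc"]=$module             # every opcode resolves to the software module here
    done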
00:13:12.699 13:31:25 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:12.699 13:31:25 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:13:12.699 13:31:25 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:13:12.699 13:31:25 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:12.699 13:31:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:12.699 13:31:25 accel -- common/autotest_common.sh@10 -- # set +x 00:13:12.699 ************************************ 00:13:12.699 START TEST accel_missing_filename 00:13:12.699 ************************************ 00:13:12.699 13:31:25 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:13:12.699 13:31:25 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:13:12.699 13:31:25 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:13:12.699 13:31:25 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:12.699 13:31:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:12.699 13:31:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:12.699 13:31:25 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:12.699 13:31:25 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:13:12.699 13:31:25 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:13:12.699 13:31:25 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:13:12.699 13:31:25 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:12.699 13:31:25 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:12.699 13:31:25 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:12.699 13:31:25 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:12.699 13:31:25 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:12.699 13:31:25 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:13:12.699 13:31:25 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:13:12.699 [2024-05-15 13:31:25.748246] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:12.699 [2024-05-15 13:31:25.748317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77017 ] 00:13:12.958 [2024-05-15 13:31:25.864925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:12.958 [2024-05-15 13:31:25.881920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.958 [2024-05-15 13:31:25.972879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.958 [2024-05-15 13:31:26.027585] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:13.217 [2024-05-15 13:31:26.107225] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:13:13.217 A filename is required. 
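The error above is the point of the test: a compress workload needs an input file via -l, which this invocation deliberately omits. For contrast, with command lines as they appear in the trace (minus the generated -c config):

    # failing shape exercised above: no input file for a compress workload
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress
    # the compress_verify test that follows does supply one, and fails on -y instead
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y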
00:13:13.217 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:13:13.217 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:13.217 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:13:13.217 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:13:13.217 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:13:13.217 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:13.217 00:13:13.217 real 0m0.454s 00:13:13.217 user 0m0.279s 00:13:13.217 sys 0m0.113s 00:13:13.217 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:13.217 ************************************ 00:13:13.217 END TEST accel_missing_filename 00:13:13.217 ************************************ 00:13:13.217 13:31:26 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:13:13.217 13:31:26 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:13.217 13:31:26 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:13:13.217 13:31:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:13.217 13:31:26 accel -- common/autotest_common.sh@10 -- # set +x 00:13:13.217 ************************************ 00:13:13.217 START TEST accel_compress_verify 00:13:13.217 ************************************ 00:13:13.217 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:13.217 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:13:13.217 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:13.217 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:13.217 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:13.217 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:13.217 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:13.217 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:13.217 13:31:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:13.217 13:31:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:13:13.217 13:31:26 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:13.217 13:31:26 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:13.217 13:31:26 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:13.217 13:31:26 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:13.217 13:31:26 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:13.217 13:31:26 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:13:13.217 13:31:26 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:13:13.217 [2024-05-15 13:31:26.252577] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:13.217 [2024-05-15 13:31:26.252680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77042 ] 00:13:13.475 [2024-05-15 13:31:26.373279] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:13.475 [2024-05-15 13:31:26.390419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.475 [2024-05-15 13:31:26.484979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.475 [2024-05-15 13:31:26.539039] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:13.733 [2024-05-15 13:31:26.612830] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:13:13.733 00:13:13.733 Compression does not support the verify option, aborting. 00:13:13.733 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:13:13.733 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:13.733 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:13:13.733 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:13:13.733 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:13:13.733 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:13.733 00:13:13.733 real 0m0.461s 00:13:13.733 user 0m0.286s 00:13:13.733 sys 0m0.113s 00:13:13.733 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:13.733 ************************************ 00:13:13.733 END TEST accel_compress_verify 00:13:13.733 ************************************ 00:13:13.733 13:31:26 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:13:13.733 13:31:26 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:13:13.733 13:31:26 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:13.733 13:31:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:13.733 13:31:26 accel -- common/autotest_common.sh@10 -- # set +x 00:13:13.733 ************************************ 00:13:13.733 START TEST accel_wrong_workload 00:13:13.733 ************************************ 00:13:13.733 13:31:26 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:13:13.733 13:31:26 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:13:13.733 13:31:26 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:13:13.733 13:31:26 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:13.733 13:31:26 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:13.733 13:31:26 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:13.733 13:31:26 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:13.733 13:31:26 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w 
foobar 00:13:13.733 13:31:26 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:13:13.734 13:31:26 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:13:13.734 13:31:26 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:13.734 13:31:26 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:13.734 13:31:26 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:13.734 13:31:26 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:13.734 13:31:26 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:13.734 13:31:26 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:13:13.734 13:31:26 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:13:13.734 Unsupported workload type: foobar 00:13:13.734 [2024-05-15 13:31:26.762475] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:13:13.734 accel_perf options: 00:13:13.734 [-h help message] 00:13:13.734 [-q queue depth per core] 00:13:13.734 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:13.734 [-T number of threads per core 00:13:13.734 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:13.734 [-t time in seconds] 00:13:13.734 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:13.734 [ dif_verify, , dif_generate, dif_generate_copy 00:13:13.734 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:13.734 [-l for compress/decompress workloads, name of uncompressed input file 00:13:13.734 [-S for crc32c workload, use this seed value (default 0) 00:13:13.734 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:13.734 [-f for fill workload, use this BYTE value (default 255) 00:13:13.734 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:13.734 [-y verify result if this switch is on] 00:13:13.734 [-a tasks to allocate per core (default: same value as -q)] 00:13:13.734 Can be used to spread operations across a wider range of memory. 
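The es=... lines surrounding these negative tests implement a small exit-status normalization for the NOT wrapper (as in "NOT accel_perf -t 1 -w compress"). A rough sketch of the behavior inferred from the traced values (234 -> 106, 161 -> 33, then 1), not the verbatim autotest_common.sh helper:

    NOT() {
        local es=0
        "$@" || es=$?                        # run the wrapped command, keep its exit status
        (( es > 128 )) && es=$((es & 0x7f))  # strip the signal bit: 234 -> 106, 161 -> 33
        (( es != 0 )) && es=1                # collapse any remaining failure to 1
        (( !es == 0 ))                       # invert: NOT succeeds only when the command failed
    }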
00:13:13.734 13:31:26 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:13:13.734 13:31:26 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:13.734 13:31:26 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:13.734 13:31:26 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:13.734 00:13:13.734 real 0m0.032s 00:13:13.734 user 0m0.018s 00:13:13.734 sys 0m0.014s 00:13:13.734 13:31:26 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:13.734 ************************************ 00:13:13.734 END TEST accel_wrong_workload 00:13:13.734 ************************************ 00:13:13.734 13:31:26 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:13:13.734 13:31:26 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:13:13.734 13:31:26 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:13:13.734 13:31:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:13.734 13:31:26 accel -- common/autotest_common.sh@10 -- # set +x 00:13:13.734 ************************************ 00:13:13.734 START TEST accel_negative_buffers 00:13:13.734 ************************************ 00:13:13.734 13:31:26 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:13:13.734 13:31:26 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:13:13.734 13:31:26 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:13:13.734 13:31:26 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:13:13.734 13:31:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:13.734 13:31:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:13:13.734 13:31:26 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:13.734 13:31:26 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:13:13.734 13:31:26 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:13:13.734 13:31:26 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:13:13.734 13:31:26 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:13.734 13:31:26 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:13.734 13:31:26 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:13.734 13:31:26 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:13.734 13:31:26 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:13.734 13:31:26 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:13:13.734 13:31:26 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:13:13.991 -x option must be non-negative. 
00:13:13.991 [2024-05-15 13:31:26.836921] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:13:13.991 accel_perf options: 00:13:13.991 [-h help message] 00:13:13.991 [-q queue depth per core] 00:13:13.991 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:13:13.991 [-T number of threads per core 00:13:13.991 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:13:13.991 [-t time in seconds] 00:13:13.991 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:13:13.991 [ dif_verify, , dif_generate, dif_generate_copy 00:13:13.991 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:13:13.992 [-l for compress/decompress workloads, name of uncompressed input file 00:13:13.992 [-S for crc32c workload, use this seed value (default 0) 00:13:13.992 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:13:13.992 [-f for fill workload, use this BYTE value (default 255) 00:13:13.992 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:13:13.992 [-y verify result if this switch is on] 00:13:13.992 [-a tasks to allocate per core (default: same value as -q)] 00:13:13.992 Can be used to spread operations across a wider range of memory. 00:13:13.992 13:31:26 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:13:13.992 13:31:26 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:13.992 ************************************ 00:13:13.992 END TEST accel_negative_buffers 00:13:13.992 ************************************ 00:13:13.992 13:31:26 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:13.992 13:31:26 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:13.992 00:13:13.992 real 0m0.031s 00:13:13.992 user 0m0.017s 00:13:13.992 sys 0m0.014s 00:13:13.992 13:31:26 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:13.992 13:31:26 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:13:13.992 13:31:26 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:13:13.992 13:31:26 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:13.992 13:31:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:13.992 13:31:26 accel -- common/autotest_common.sh@10 -- # set +x 00:13:13.992 ************************************ 00:13:13.992 START TEST accel_crc32c 00:13:13.992 ************************************ 00:13:13.992 13:31:26 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:13:13.992 13:31:26 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:13:13.992 13:31:26 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:13:13.992 13:31:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:13.992 13:31:26 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:13:13.992 13:31:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:13.992 13:31:26 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:13:13.992 13:31:26 accel.accel_crc32c -- accel/accel.sh@12 -- # 
build_accel_config 00:13:13.992 13:31:26 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:13.992 13:31:26 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:13.992 13:31:26 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:13.992 13:31:26 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:13.992 13:31:26 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:13.992 13:31:26 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:13:13.992 13:31:26 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:13:13.992 [2024-05-15 13:31:26.908200] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:13.992 [2024-05-15 13:31:26.908302] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77106 ] 00:13:13.992 [2024-05-15 13:31:27.029044] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:13.992 [2024-05-15 13:31:27.049380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.250 [2024-05-15 13:31:27.146160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.250 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.251 13:31:27 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:14.251 13:31:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:15.266 13:31:28 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:15.266 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:15.525 13:31:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:13:15.525 13:31:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:15.525 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:15.525 13:31:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:15.525 13:31:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:15.525 13:31:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:13:15.525 13:31:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:15.525 00:13:15.525 real 0m1.480s 00:13:15.525 user 0m1.262s 00:13:15.525 sys 0m0.123s 00:13:15.525 13:31:28 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:15.525 13:31:28 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:13:15.525 ************************************ 00:13:15.525 END TEST accel_crc32c 00:13:15.525 ************************************ 00:13:15.525 13:31:28 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:13:15.525 13:31:28 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:15.525 13:31:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:15.525 13:31:28 accel -- common/autotest_common.sh@10 -- # set +x 00:13:15.525 ************************************ 00:13:15.525 START TEST accel_crc32c_C2 00:13:15.525 ************************************ 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:13:15.525 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:13:15.525 [2024-05-15 13:31:28.428728] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:15.525 [2024-05-15 13:31:28.428830] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77140 ] 00:13:15.525 [2024-05-15 13:31:28.549584] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:15.525 [2024-05-15 13:31:28.569013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.783 [2024-05-15 13:31:28.673418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.783 13:31:28 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:13:15.783 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.784 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.784 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.784 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:15.784 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.784 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:15.784 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:15.784 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:15.784 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:15.784 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:13:15.784 13:31:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:17.159 00:13:17.159 real 0m1.490s 00:13:17.159 user 0m1.277s 00:13:17.159 sys 0m0.120s 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:17.159 13:31:29 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:13:17.159 ************************************ 00:13:17.159 END TEST accel_crc32c_C2 00:13:17.159 ************************************ 00:13:17.159 13:31:29 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:13:17.159 13:31:29 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:17.159 13:31:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:17.159 13:31:29 accel -- common/autotest_common.sh@10 -- # set +x 00:13:17.159 ************************************ 00:13:17.159 START TEST accel_copy 00:13:17.159 ************************************ 00:13:17.159 13:31:29 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:13:17.159 13:31:29 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:13:17.159 13:31:29 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:13:17.159 13:31:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.159 
13:31:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.159 13:31:29 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:13:17.159 13:31:29 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:13:17.159 13:31:29 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:13:17.159 13:31:29 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:17.159 13:31:29 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:17.159 13:31:29 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:17.159 13:31:29 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:17.159 13:31:29 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:17.159 13:31:29 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:13:17.159 13:31:29 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:13:17.159 [2024-05-15 13:31:29.981721] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:17.159 [2024-05-15 13:31:29.981865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77175 ] 00:13:17.159 [2024-05-15 13:31:30.111143] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:17.159 [2024-05-15 13:31:30.132510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.159 [2024-05-15 13:31:30.235054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.417 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:17.417 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.417 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.417 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.417 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:17.417 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.417 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.417 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.417 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:13:17.418 13:31:30 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:17.418 13:31:30 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:18.352 13:31:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:18.352 13:31:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:18.352 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var 
val 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:18.353 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:18.612 13:31:31 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:13:18.612 13:31:31 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:18.612 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:13:18.612 13:31:31 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:13:18.612 13:31:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:18.612 13:31:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:13:18.612 13:31:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:18.612 ************************************ 00:13:18.612 END TEST accel_copy 00:13:18.612 ************************************ 00:13:18.612 00:13:18.612 real 0m1.503s 00:13:18.612 user 0m1.282s 00:13:18.612 sys 0m0.125s 00:13:18.612 13:31:31 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:18.612 13:31:31 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:13:18.612 13:31:31 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:18.612 13:31:31 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:18.612 13:31:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:18.612 13:31:31 accel -- common/autotest_common.sh@10 -- # set +x 00:13:18.612 ************************************ 00:13:18.612 START TEST accel_fill 00:13:18.612 ************************************ 00:13:18.612 13:31:31 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:18.612 13:31:31 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:13:18.612 13:31:31 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:13:18.612 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.612 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.612 13:31:31 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:18.612 13:31:31 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:13:18.612 13:31:31 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:13:18.612 13:31:31 accel.accel_fill -- accel/accel.sh@31 -- # 
accel_json_cfg=() 00:13:18.612 13:31:31 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:18.612 13:31:31 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:18.612 13:31:31 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:18.612 13:31:31 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:18.612 13:31:31 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:13:18.612 13:31:31 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:13:18.612 [2024-05-15 13:31:31.527748] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:18.612 [2024-05-15 13:31:31.528719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77208 ] 00:13:18.612 [2024-05-15 13:31:31.655786] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:18.612 [2024-05-15 13:31:31.675133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.871 [2024-05-15 13:31:31.769915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:18.871 13:31:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" 
in 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:13:20.277 13:31:32 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:20.277 00:13:20.277 real 0m1.482s 00:13:20.277 user 0m1.268s 00:13:20.277 sys 0m0.118s 00:13:20.277 ************************************ 00:13:20.277 13:31:32 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:20.277 13:31:32 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:13:20.277 END TEST accel_fill 00:13:20.277 ************************************ 00:13:20.277 13:31:33 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:13:20.277 13:31:33 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:20.277 13:31:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:20.277 13:31:33 accel -- common/autotest_common.sh@10 -- # set +x 00:13:20.277 ************************************ 00:13:20.277 START TEST accel_copy_crc32c 00:13:20.277 ************************************ 00:13:20.277 13:31:33 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:13:20.277 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:13:20.277 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:13:20.277 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.277 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:20.278 13:31:33 
accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:13:20.278 [2024-05-15 13:31:33.054740] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:20.278 [2024-05-15 13:31:33.055083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77244 ] 00:13:20.278 [2024-05-15 13:31:33.179891] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:20.278 [2024-05-15 13:31:33.197070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.278 [2024-05-15 13:31:33.290783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 
13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:20.278 13:31:33 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:20.278 13:31:33 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:21.651 00:13:21.651 real 0m1.471s 00:13:21.651 user 0m1.255s 00:13:21.651 sys 0m0.122s 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:21.651 13:31:34 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:13:21.651 ************************************ 00:13:21.651 END TEST accel_copy_crc32c 00:13:21.651 ************************************ 00:13:21.651 13:31:34 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:13:21.651 13:31:34 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:21.651 13:31:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:21.652 13:31:34 accel -- common/autotest_common.sh@10 -- # set +x 00:13:21.652 ************************************ 00:13:21.652 START TEST accel_copy_crc32c_C2 00:13:21.652 ************************************ 00:13:21.652 13:31:34 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:13:21.652 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:13:21.652 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:13:21.652 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.652 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.652 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:13:21.652 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:13:21.652 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:13:21.652 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:21.652 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:21.652 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:21.652 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:21.652 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:21.652 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:13:21.652 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:13:21.652 [2024-05-15 13:31:34.566622] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:21.652 [2024-05-15 13:31:34.566871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77273 ] 00:13:21.652 [2024-05-15 13:31:34.683484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
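The command expanded at accel/accel.sh@12 in the trace above is the binary actually under test: build/examples/accel_perf is launched with the flags given to accel_test (-t 1 -w copy_crc32c -y -C 2) plus -c /dev/fd/62, the accel JSON configuration that build_accel_config apparently hands over on a file descriptor via process substitution; with accel_json_cfg empty, no extra accel modules are configured and the software path is exercised (accel_module=software later in the trace). A minimal standalone sketch of an equivalent invocation, assuming the SPDK tree path shown in the log and a pre-generated config file standing in for the harness's fd:

  # Illustrative one-off reproduction of the copy_crc32c -C 2 run traced above (flags copied verbatim from the log)
  SPDK=/home/vagrant/spdk_repo/spdk        # build tree path taken from the trace
  CFG=accel.json                           # assumed stand-in for the JSON that build_accel_config emits
  time "$SPDK"/build/examples/accel_perf -c <(cat "$CFG") -t 1 -w copy_crc32c -y -C 2
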
00:13:21.652 [2024-05-15 13:31:34.704721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.910 [2024-05-15 13:31:34.803134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.910 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@21 -- # case "$var" in 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:21.911 13:31:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:23.286 00:13:23.286 real 0m1.475s 00:13:23.286 user 0m1.262s 00:13:23.286 sys 0m0.120s 00:13:23.286 ************************************ 00:13:23.286 END TEST accel_copy_crc32c_C2 00:13:23.286 ************************************ 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:23.286 13:31:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:13:23.286 13:31:36 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:13:23.286 13:31:36 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:23.286 13:31:36 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:23.286 13:31:36 accel -- common/autotest_common.sh@10 -- # set +x 00:13:23.286 ************************************ 00:13:23.286 START TEST accel_dualcast 00:13:23.286 ************************************ 00:13:23.286 13:31:36 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:13:23.286 13:31:36 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:13:23.286 13:31:36 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:13:23.286 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.286 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.286 13:31:36 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:13:23.286 13:31:36 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:13:23.286 13:31:36 accel.accel_dualcast -- 
accel/accel.sh@12 -- # build_accel_config 00:13:23.286 13:31:36 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:23.286 13:31:36 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:23.286 13:31:36 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:23.286 13:31:36 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:23.286 13:31:36 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:23.286 13:31:36 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:13:23.286 13:31:36 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:13:23.286 [2024-05-15 13:31:36.091432] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:23.286 [2024-05-15 13:31:36.091541] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77313 ] 00:13:23.286 [2024-05-15 13:31:36.212936] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:23.286 [2024-05-15 13:31:36.229327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.286 [2024-05-15 13:31:36.328492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:23.546 13:31:36 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:23.546 13:31:36 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:24.482 13:31:37 
accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:13:24.482 13:31:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:24.482 00:13:24.482 real 0m1.477s 00:13:24.482 user 0m1.271s 00:13:24.482 sys 0m0.112s 00:13:24.482 13:31:37 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:24.482 13:31:37 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:13:24.482 ************************************ 00:13:24.482 END TEST accel_dualcast 00:13:24.482 ************************************ 00:13:24.482 13:31:37 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:13:24.482 13:31:37 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:24.482 13:31:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:24.482 13:31:37 accel -- common/autotest_common.sh@10 -- # set +x 00:13:24.740 ************************************ 00:13:24.740 START TEST accel_compare 00:13:24.740 ************************************ 00:13:24.740 13:31:37 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:13:24.740 13:31:37 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:13:24.740 13:31:37 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:13:24.740 13:31:37 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:13:24.740 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.740 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.740 13:31:37 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:13:24.740 13:31:37 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:13:24.740 13:31:37 
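
For reference, the dualcast case that just finished was started (per the trace above) as build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y, with its defaults read back as 4096-byte buffers, queue depth 32, one core and the software module. A sketch of re-running it by hand outside the harness; the only assumption is that the JSON config passed on /dev/fd/62 can be dropped when no extra accel modules are wanted:

    # Flags copied from the logged invocation:
    #   -t 1          run the workload for 1 second
    #   -w dualcast   workload to exercise
    #   -y            verify the destination buffers
    SPDK_REPO=/home/vagrant/spdk_repo/spdk          # path as it appears in this log
    "$SPDK_REPO/build/examples/accel_perf" -t 1 -w dualcast -y
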
accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:24.740 13:31:37 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:24.740 13:31:37 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:24.740 13:31:37 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:24.740 13:31:37 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:24.740 13:31:37 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:13:24.740 13:31:37 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:13:24.740 [2024-05-15 13:31:37.609740] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:24.740 [2024-05-15 13:31:37.609815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77342 ] 00:13:24.740 [2024-05-15 13:31:37.726098] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:24.740 [2024-05-15 13:31:37.741002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.740 [2024-05-15 13:31:37.826307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.998 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- 
accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:24.999 13:31:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:25.936 13:31:39 
accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:13:25.936 13:31:39 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:25.936 00:13:25.936 real 0m1.447s 00:13:25.936 user 0m1.239s 00:13:25.936 sys 0m0.116s 00:13:26.196 13:31:39 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:26.196 13:31:39 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:13:26.196 ************************************ 00:13:26.196 END TEST accel_compare 00:13:26.196 ************************************ 00:13:26.196 13:31:39 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:13:26.196 13:31:39 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:26.196 13:31:39 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:26.196 13:31:39 accel -- common/autotest_common.sh@10 -- # set +x 00:13:26.196 ************************************ 00:13:26.196 START TEST accel_xor 00:13:26.196 ************************************ 00:13:26.196 13:31:39 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:13:26.196 13:31:39 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:13:26.196 13:31:39 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:13:26.196 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.196 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.196 13:31:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:13:26.196 13:31:39 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:13:26.196 13:31:39 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:13:26.196 13:31:39 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:26.196 13:31:39 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:26.196 13:31:39 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:26.196 13:31:39 
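
Each case ends with the three [[ ... ]] checks seen above ([[ -n software ]], [[ -n compare ]], [[ software == \s\o\f\t\w\a\r\e ]]): after xtrace has expanded the variables they simply confirm that a module name and an opcode were captured and that the software engine is the one that ran (the backslash-escaped right-hand side is just how xtrace prints the pattern operand of ==). A stand-alone analogue, with an explicit expected-opcode comparison added for clarity; that extra check is not in the original script:

    # Sanity-check a finished run. accel_module / accel_opc mirror the variable
    # names visible in the trace; expected_opc is an addition for illustration.
    check_accel_result() {
        local accel_module=$1 accel_opc=$2 expected_opc=$3
        [[ -n $accel_module ]]              || { echo 'no module captured' >&2; return 1; }
        [[ -n $accel_opc ]]                 || { echo 'no opcode captured' >&2; return 1; }
        [[ $accel_module == software ]]     || { echo "unexpected module: $accel_module" >&2; return 1; }
        [[ $accel_opc == "$expected_opc" ]] || { echo "unexpected opcode: $accel_opc" >&2; return 1; }
    }

    check_accel_result software compare compare && echo 'accel_compare checks pass'
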
accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:26.196 13:31:39 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:26.196 13:31:39 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:13:26.196 13:31:39 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:13:26.196 [2024-05-15 13:31:39.108896] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:26.196 [2024-05-15 13:31:39.109034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77377 ] 00:13:26.196 [2024-05-15 13:31:39.238969] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:26.196 [2024-05-15 13:31:39.258570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.455 [2024-05-15 13:31:39.346452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- 
# read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:26.455 13:31:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" 
in 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.831 13:31:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:27.831 ************************************ 00:13:27.831 END TEST accel_xor 00:13:27.832 ************************************ 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:27.832 00:13:27.832 real 0m1.469s 00:13:27.832 user 0m1.249s 00:13:27.832 sys 0m0.127s 00:13:27.832 13:31:40 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:27.832 13:31:40 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:13:27.832 13:31:40 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:13:27.832 13:31:40 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:27.832 13:31:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:27.832 13:31:40 accel -- common/autotest_common.sh@10 -- # set +x 00:13:27.832 ************************************ 00:13:27.832 START TEST accel_xor 00:13:27.832 ************************************ 00:13:27.832 13:31:40 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
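
The xor case above ran with the default of two source buffers (the val=2 read in its prologue); the block that starts next repeats it with -x 3 for three sources. Both invocations come straight from this log, so the source-count sweep can be reproduced with a small loop; passing -x 2 explicitly for the default case is an assumption, since the harness only passes -x for the three-source run:

    # Sweep the xor source count the way the two xor cases do.
    ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf   # path from this log
    for nsrc in 2 3; do
        echo "== xor with $nsrc source buffers =="
        "$ACCEL_PERF" -t 1 -w xor -y -x "$nsrc"
    done
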
00:13:27.832 [2024-05-15 13:31:40.627112] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:27.832 [2024-05-15 13:31:40.627218] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77411 ] 00:13:27.832 [2024-05-15 13:31:40.747392] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:27.832 [2024-05-15 13:31:40.766902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.832 [2024-05-15 13:31:40.854301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:27.832 13:31:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:29.228 13:31:42 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:29.228 ************************************ 00:13:29.228 END TEST accel_xor 00:13:29.228 ************************************ 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:13:29.228 13:31:42 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:29.228 00:13:29.228 real 0m1.459s 00:13:29.228 user 0m1.259s 00:13:29.228 sys 0m0.107s 00:13:29.228 13:31:42 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:29.228 13:31:42 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:13:29.228 13:31:42 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:13:29.228 13:31:42 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:13:29.228 13:31:42 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:29.228 13:31:42 accel -- common/autotest_common.sh@10 -- # set +x 00:13:29.228 ************************************ 00:13:29.228 START TEST accel_dif_verify 00:13:29.228 ************************************ 00:13:29.228 13:31:42 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:13:29.228 13:31:42 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:13:29.228 13:31:42 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:13:29.228 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.228 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.228 13:31:42 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:13:29.228 13:31:42 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:13:29.228 13:31:42 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:13:29.228 13:31:42 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:29.228 13:31:42 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:29.228 13:31:42 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:29.228 13:31:42 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:29.228 13:31:42 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:29.228 13:31:42 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:13:29.228 13:31:42 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:13:29.228 [2024-05-15 13:31:42.135030] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
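
Every prologue also traces build_accel_config: an empty accel_json_cfg=() array, a few [[ 0 -gt 0 ]] guards that all stay false here (no DSA/IAA/crypto module was requested), a local IFS=, and a jq -r . whose output is what -c /dev/fd/62 hands to accel_perf. A rough, hypothetical reconstruction of that plumbing; the JSON shape and the fd wiring below are illustrative guesses, not the harness's exact code:

    # Hypothetical sketch: join config entries, pretty-print them with jq and
    # expose the result to accel_perf as /dev/fd/62.
    ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf   # path from this log
    accel_json_cfg=()              # stays empty in these runs

    run_accel_perf() {
        local IFS=,                # comma-join array entries, as the traced 'local IFS=,' suggests
        if [[ -n ${accel_json_cfg[*]} ]]; then
            jq -r . <<<"{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}" |
                "$ACCEL_PERF" -c /dev/fd/62 "$@" 62<&0
        else
            "$ACCEL_PERF" "$@"     # nothing to configure: software-engine defaults
        fi
    }

    run_accel_perf -t 1 -w dif_verify    # same arguments as the case that starts above
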
00:13:29.228 [2024-05-15 13:31:42.135133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77445 ] 00:13:29.228 [2024-05-15 13:31:42.255273] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:29.228 [2024-05-15 13:31:42.274821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.487 [2024-05-15 13:31:42.360122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:13:29.487 13:31:42 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:29.487 13:31:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:29.487 13:31:42 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:30.862 13:31:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:30.862 13:31:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:30.862 13:31:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:30.862 13:31:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:30.862 13:31:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:30.862 13:31:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:30.862 13:31:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:30.862 13:31:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 ************************************ 00:13:30.863 END TEST accel_dif_verify 00:13:30.863 ************************************ 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:13:30.863 13:31:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:30.863 00:13:30.863 real 0m1.456s 00:13:30.863 user 0m1.244s 00:13:30.863 sys 0m0.121s 00:13:30.863 13:31:43 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:30.863 13:31:43 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:13:30.863 13:31:43 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:13:30.863 13:31:43 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:13:30.863 13:31:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:30.863 13:31:43 accel -- common/autotest_common.sh@10 -- # set +x 00:13:30.863 ************************************ 00:13:30.863 START TEST accel_dif_generate 00:13:30.863 ************************************ 00:13:30.863 13:31:43 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:13:30.863 13:31:43 
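
Every case in this section lands in the same ballpark: real wall-clock time of about 1.45-1.48 s for a one-second (-t 1) run, the remainder being application start-up and teardown, with user time around 1.24-1.27 s and roughly 0.11-0.13 s of sys time. When skimming a saved copy of a log like this, the per-test banners and timings can be pulled out with standard tools; build.log below is a placeholder for wherever the console output was saved:

    # List the per-test banners and wall-clock times from a saved console log.
    grep -E 'START TEST|END TEST|real[[:space:]]+[0-9]+m[0-9.]+s' build.log

    # Sum the wall-clock time of all cases (assumes the 0m1.475s-style format seen above).
    grep -Eo 'real[[:space:]]+[0-9]+m[0-9.]+s' build.log |
        awk '{ sub(/^real[[:space:]]+/, ""); split($0, p, /[ms]/); total += p[1]*60 + p[2] }
             END { printf "total: %.3f s\n", total }'
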
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:13:30.863 [2024-05-15 13:31:43.641153] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:30.863 [2024-05-15 13:31:43.641243] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77480 ] 00:13:30.863 [2024-05-15 13:31:43.761148] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:30.863 [2024-05-15 13:31:43.779728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.863 [2024-05-15 13:31:43.873787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 
00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:30.863 13:31:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:32.241 13:31:45 
accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:13:32.241 13:31:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:32.241 00:13:32.241 real 0m1.467s 00:13:32.241 user 0m1.261s 00:13:32.241 sys 0m0.115s 00:13:32.241 13:31:45 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:32.241 13:31:45 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:13:32.241 ************************************ 00:13:32.241 END TEST accel_dif_generate 00:13:32.241 ************************************ 00:13:32.241 13:31:45 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:13:32.241 13:31:45 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:13:32.241 13:31:45 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:32.241 13:31:45 accel -- common/autotest_common.sh@10 -- # set +x 00:13:32.241 ************************************ 00:13:32.241 START TEST accel_dif_generate_copy 00:13:32.241 ************************************ 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:13:32.241 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:13:32.241 [2024-05-15 13:31:45.155638] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:32.241 [2024-05-15 13:31:45.155724] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77513 ] 00:13:32.241 [2024-05-15 13:31:45.275957] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
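For reference, the dif_generate case summarized just above drives the accel_perf example binary with exactly the flags visible in the trace. A minimal way to repeat it by hand, assuming the same SPDK checkout path and dropping the -c /dev/fd/62 JSON config that accel.sh generates (it carried no module entries in this run, so omitting it is assumed not to change which module is picked), would be:

    #!/usr/bin/env bash
    # Re-run the software dif_generate perf case from the trace above.
    # Binary path and the -t/-w flags are copied from the logged command line.
    SPDK_ROOT=/home/vagrant/spdk_repo/spdk
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w dif_generate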
00:13:32.241 [2024-05-15 13:31:45.293123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.501 [2024-05-15 13:31:45.385556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
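The IFS=:, read -r var val and case "$var" lines that dominate this trace are accel.sh walking the colon-separated summary accel_perf prints and remembering which opcode and module the run used; the later [[ -n software ]] and [[ -n dif_generate_copy ]] checks assert on exactly those two values. The real key strings are not visible in this excerpt, so the sketch below uses placeholder patterns:

    # Simplified sketch of the parse loop seen in the trace; the *opcode*/*module*
    # patterns are placeholders, not the actual labels printed by accel_perf.
    parse_accel_summary() {
        local accel_opc="" accel_module=""
        while IFS=: read -r var val; do
            case "$var" in
                *opcode*) accel_opc=${val// /} ;;
                *module*) accel_module=${val// /} ;;
            esac
        done
        echo "module=$accel_module opc=$accel_opc"
    }
    # Usage (hypothetical): accel_perf -t 1 -w dif_generate_copy | parse_accel_summary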
00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:32.501 13:31:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:33.877 00:13:33.877 real 0m1.460s 00:13:33.877 user 0m1.256s 00:13:33.877 sys 0m0.112s 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:33.877 13:31:46 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:13:33.877 ************************************ 00:13:33.877 END TEST accel_dif_generate_copy 00:13:33.877 ************************************ 00:13:33.877 13:31:46 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:13:33.877 13:31:46 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:33.877 13:31:46 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:13:33.877 13:31:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:33.877 13:31:46 accel -- common/autotest_common.sh@10 -- # set +x 00:13:33.877 ************************************ 00:13:33.877 START TEST accel_comp 00:13:33.877 ************************************ 00:13:33.877 13:31:46 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:33.877 13:31:46 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:13:33.877 13:31:46 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:13:33.877 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:33.877 13:31:46 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:33.877 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:33.877 13:31:46 accel.accel_comp -- accel/accel.sh@12 
-- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:13:33.878 [2024-05-15 13:31:46.667213] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:33.878 [2024-05-15 13:31:46.667298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77549 ] 00:13:33.878 [2024-05-15 13:31:46.787238] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:33.878 [2024-05-15 13:31:46.804593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.878 [2024-05-15 13:31:46.900278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 
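The accel_comp case above hands a real input file to the engine: the logged command line is the same accel_perf invocation with -w compress plus -l pointing at the test/accel/bib fixture inside the repo. A hand-run equivalent, with the path taken from the log and the generated JSON config again omitted, would look like:

    # Compress the bundled bib fixture for about one second on the software module.
    SPDK_ROOT=/home/vagrant/spdk_repo/spdk
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w compress -l "$SPDK_ROOT/test/accel/bib"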
00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:33.878 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@21 
-- # case "$var" in 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:34.137 13:31:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:35.073 13:31:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:35.073 13:31:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.073 13:31:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:35.073 13:31:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:13:35.074 13:31:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:35.074 00:13:35.074 real 0m1.474s 00:13:35.074 user 0m1.264s 00:13:35.074 sys 0m0.118s 00:13:35.074 13:31:48 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:35.074 13:31:48 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:13:35.074 ************************************ 00:13:35.074 END TEST accel_comp 00:13:35.074 ************************************ 00:13:35.074 13:31:48 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:35.074 13:31:48 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:13:35.074 13:31:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:35.074 13:31:48 accel -- common/autotest_common.sh@10 -- # set +x 00:13:35.074 ************************************ 00:13:35.074 START TEST accel_decomp 00:13:35.074 ************************************ 00:13:35.074 13:31:48 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:35.074 13:31:48 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:13:35.074 13:31:48 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:13:35.074 13:31:48 accel.accel_decomp 
-- accel/accel.sh@19 -- # IFS=: 00:13:35.074 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.074 13:31:48 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:35.074 13:31:48 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:13:35.074 13:31:48 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:13:35.074 13:31:48 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:35.074 13:31:48 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:35.074 13:31:48 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:35.074 13:31:48 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:35.074 13:31:48 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:35.074 13:31:48 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:13:35.074 13:31:48 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:13:35.333 [2024-05-15 13:31:48.186598] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:35.333 [2024-05-15 13:31:48.186691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77578 ] 00:13:35.333 [2024-05-15 13:31:48.302755] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:35.333 [2024-05-15 13:31:48.320500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.333 [2024-05-15 13:31:48.418022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
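accel_decomp reuses the same bib fixture but flips the workload to decompress and adds -y, which for these cases appears to request verification of the output (the flag is only passed on the decompress variants in this log). Under the same path assumptions as the earlier sketches:

    # Decompress the fixture; -y is passed as in the trace (apparently enabling result verification).
    SPDK_ROOT=/home/vagrant/spdk_repo/spdk
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_ROOT/test/accel/bib" -y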
00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.591 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.592 13:31:48 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:35.592 13:31:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:36.526 13:31:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:36.526 13:31:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:36.526 13:31:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:36.526 13:31:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:36.526 13:31:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:36.526 13:31:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:36.526 13:31:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:36.526 13:31:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:36.526 13:31:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:36.526 13:31:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:36.526 13:31:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:36.526 13:31:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:36.784 ************************************ 00:13:36.784 END TEST accel_decomp 00:13:36.784 ************************************ 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:36.785 13:31:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:36.785 00:13:36.785 real 0m1.462s 00:13:36.785 user 0m1.268s 00:13:36.785 sys 0m0.102s 00:13:36.785 13:31:49 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:36.785 13:31:49 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:13:36.785 13:31:49 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:36.785 13:31:49 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:36.785 13:31:49 accel -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:13:36.785 13:31:49 accel -- common/autotest_common.sh@10 -- # set +x 00:13:36.785 ************************************ 00:13:36.785 START TEST accel_decmop_full 00:13:36.785 ************************************ 00:13:36.785 13:31:49 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:36.785 13:31:49 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:13:36.785 13:31:49 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:13:36.785 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:36.785 13:31:49 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:36.785 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:36.785 13:31:49 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:13:36.785 13:31:49 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:13:36.785 13:31:49 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:36.785 13:31:49 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:36.785 13:31:49 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:36.785 13:31:49 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:36.785 13:31:49 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:36.785 13:31:49 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:13:36.785 13:31:49 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:13:36.785 [2024-05-15 13:31:49.695521] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:36.785 [2024-05-15 13:31:49.695895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77618 ] 00:13:36.785 [2024-05-15 13:31:49.817325] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
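The START TEST / END TEST banners and the per-test real/user/sys lines that structure this log come from the run_test helper (the common/autotest_common.sh frames in the trace), invoked just above as run_test accel_decmop_full accel_test -t 1 -w decompress ... -y -o 0. Its body is not shown in this trace, so the following is only a minimal stand-in that mirrors the visible output:

    # Minimal stand-in for run_test: opening banner, timed execution, closing banner.
    # The real helper also manages xtrace (the xtrace_disable calls seen above) and exit codes.
    run_test_sketch() {
        local name=$1; shift
        printf '************************************\nSTART TEST %s\n************************************\n' "$name"
        time "$@"
        local rc=$?
        printf '************************************\nEND TEST %s\n************************************\n' "$name"
        return "$rc"
    }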
00:13:36.785 [2024-05-15 13:31:49.835420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.042 [2024-05-15 13:31:49.930963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@22 -- # 
accel_module=software 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.042 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.043 13:31:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:37.043 13:31:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.043 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.043 13:31:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:37.043 13:31:50 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 
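Compared with the plain decompress run, accel_decmop_full only adds -o 0, and the configuration dump above switches from the 4096-byte buffers of the earlier tests to a single '111250 bytes' value, presumably the full size of the input per operation; that reading of -o 0 is inferred from the trace rather than from accel_perf's help text. A hand-run sketch under the same assumptions as before:

    # Full-buffer decompress: same command as accel_decomp plus -o 0, as passed in the trace.
    SPDK_ROOT=/home/vagrant/spdk_repo/spdk
    "$SPDK_ROOT/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_ROOT/test/accel/bib" -y -o 0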
00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:13:38.417 ************************************ 00:13:38.417 END TEST accel_decmop_full 00:13:38.417 ************************************ 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:38.417 13:31:51 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:38.417 00:13:38.417 real 0m1.480s 00:13:38.417 user 0m1.280s 00:13:38.417 sys 0m0.109s 00:13:38.417 13:31:51 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:38.417 13:31:51 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:13:38.417 13:31:51 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:38.417 13:31:51 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:38.417 13:31:51 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:38.417 13:31:51 accel -- common/autotest_common.sh@10 -- # set +x 00:13:38.417 ************************************ 00:13:38.417 START TEST accel_decomp_mcore 00:13:38.417 ************************************ 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- 
# build_accel_config 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:38.417 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:38.417 [2024-05-15 13:31:51.228148] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:38.417 [2024-05-15 13:31:51.228239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77647 ] 00:13:38.417 [2024-05-15 13:31:51.348504] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:38.417 [2024-05-15 13:31:51.362557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:38.417 [2024-05-15 13:31:51.458097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.417 [2024-05-15 13:31:51.458235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.417 [2024-05-15 13:31:51.458347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:38.417 [2024-05-15 13:31:51.458457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val=Yes 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:38.675 13:31:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:39.608 
13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:13:39.608 ************************************ 00:13:39.608 END TEST accel_decomp_mcore 00:13:39.608 ************************************ 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:39.608 00:13:39.608 real 0m1.476s 00:13:39.608 user 0m4.647s 00:13:39.608 sys 0m0.125s 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:39.608 13:31:52 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:39.867 13:31:52 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:39.867 13:31:52 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:39.867 13:31:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:39.867 13:31:52 accel -- common/autotest_common.sh@10 -- # set +x 00:13:39.867 ************************************ 00:13:39.867 START TEST accel_decomp_full_mcore 00:13:39.867 ************************************ 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:13:39.867 13:31:52 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:13:39.867 [2024-05-15 13:31:52.747513] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
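A note for readers skimming the xtrace above: run_test hands its flags to the accel_test wrapper, and accel_test ends up executing the accel_perf example binary with a generated JSON accel config on /dev/fd/62; the fully expanded command appears at the accel.sh@12 step of the trace. For the accel_decomp_full_mcore run just started, that effective command (copied from the trace above, not reconstructed) is:

    # Effective command behind "accel_test -t 1 -w decompress -l .../bib -y -o 0 -m 0xf".
    # /dev/fd/62 carries the generated accel JSON config, which appears to stay empty in
    # this run (every "[[ 0 -gt 0 ]]" check in build_accel_config above evaluates false).
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -c /dev/fd/62 \
        -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
        -y -o 0 -m 0xf
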
00:13:39.867 [2024-05-15 13:31:52.747592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77690 ] 00:13:39.867 [2024-05-15 13:31:52.865009] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:39.867 [2024-05-15 13:31:52.882363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.125 [2024-05-15 13:31:52.980265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.125 [2024-05-15 13:31:52.980419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.125 [2024-05-15 13:31:52.980561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.125 [2024-05-15 13:31:52.980685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.125 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.126 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:13:40.126 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.126 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.126 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.126 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:40.126 13:31:53 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:13:40.126 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.126 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:40.126 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:40.126 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:40.126 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:40.126 13:31:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:13:41.497 ************************************ 00:13:41.497 END TEST accel_decomp_full_mcore 00:13:41.497 ************************************ 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:41.497 00:13:41.497 real 0m1.485s 00:13:41.497 user 0m4.702s 00:13:41.497 sys 0m0.125s 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:41.497 13:31:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:13:41.497 13:31:54 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:41.497 13:31:54 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:41.497 13:31:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:41.497 13:31:54 accel -- common/autotest_common.sh@10 -- # set +x 00:13:41.497 ************************************ 00:13:41.497 START TEST accel_decomp_mthread 00:13:41.497 ************************************ 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:41.497 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:41.497 [2024-05-15 13:31:54.281203] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
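One detail that makes these runs easier to compare: the *_mcore variants pass -m 0xf, which shows up as "-c 0xf" in their DPDK EAL parameters and yields four "Reactor started on core" notices, while the mthread run starting here drops the mask and reports a single core. If this console output has been saved to a file, the split can be confirmed quickly with grep; the file name below is only a placeholder, not something produced by this job:

    # Tally "Reactor started on core N" notices in a saved copy of this console log
    # (replace the placeholder file name with wherever the output was captured).
    grep -o 'Reactor started on core [0-9]*' nvmf-tcp-vg-autotest.log | sort | uniq -c
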
00:13:41.497 [2024-05-15 13:31:54.281283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77722 ] 00:13:41.498 [2024-05-15 13:31:54.397452] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:41.498 [2024-05-15 13:31:54.411345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.498 [2024-05-15 13:31:54.515412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 
00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:41.498 13:31:54 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:42.894 00:13:42.894 real 0m1.475s 00:13:42.894 user 0m1.264s 00:13:42.894 sys 0m0.119s 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:42.894 ************************************ 00:13:42.894 END TEST accel_decomp_mthread 00:13:42.894 ************************************ 00:13:42.894 13:31:55 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:42.894 13:31:55 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:42.894 13:31:55 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:42.894 13:31:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:42.894 13:31:55 accel -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.894 ************************************ 00:13:42.894 START TEST accel_decomp_full_mthread 00:13:42.894 ************************************ 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:13:42.894 13:31:55 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:13:42.894 [2024-05-15 13:31:55.798770] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:42.895 [2024-05-15 13:31:55.798861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77762 ] 00:13:42.895 [2024-05-15 13:31:55.915366] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
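Before the accel_decomp_full_mthread trace continues, it may help to line up the four decompress variants exercised in this stretch of the log; they differ only in their flags. The commands below are the ones run_test hands to accel_test in the traces above (the first is inferred from its 0xf mask and 4096-byte payload, since its own run_test line scrolled past before this excerpt; the SPDK variable is introduced here purely for readability, and the per-variant notes follow from the test names, the 4096-byte versus 111250-byte values, and the "Total cores available" lines seen above):

    SPDK=/home/vagrant/spdk_repo/spdk
    accel_test -t 1 -w decompress -l $SPDK/test/accel/bib -y      -m 0xf   # accel_decomp_mcore: 4 reactors, 4096-byte chunks
    accel_test -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf   # accel_decomp_full_mcore: 4 reactors, full 111250-byte buffer
    accel_test -t 1 -w decompress -l $SPDK/test/accel/bib -y      -T 2     # accel_decomp_mthread: 1 reactor, two worker threads
    accel_test -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -T 2     # accel_decomp_full_mthread: 1 reactor, two worker threads, full buffer
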
00:13:42.895 [2024-05-15 13:31:55.927596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.152 [2024-05-15 13:31:56.030176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:43.152 13:31:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:13:44.521 00:13:44.521 real 0m1.511s 00:13:44.521 user 0m1.300s 00:13:44.521 sys 0m0.121s 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:44.521 13:31:57 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:13:44.521 ************************************ 00:13:44.521 END TEST accel_decomp_full_mthread 00:13:44.521 ************************************ 00:13:44.521 13:31:57 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:13:44.521 13:31:57 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:44.521 13:31:57 accel -- accel/accel.sh@137 -- # build_accel_config 00:13:44.521 13:31:57 accel -- 
common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:44.521 13:31:57 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:13:44.521 13:31:57 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:44.521 13:31:57 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:13:44.521 13:31:57 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:13:44.521 13:31:57 accel -- common/autotest_common.sh@10 -- # set +x 00:13:44.521 13:31:57 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:13:44.521 13:31:57 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:13:44.521 13:31:57 accel -- accel/accel.sh@40 -- # local IFS=, 00:13:44.521 13:31:57 accel -- accel/accel.sh@41 -- # jq -r . 00:13:44.521 ************************************ 00:13:44.521 START TEST accel_dif_functional_tests 00:13:44.521 ************************************ 00:13:44.521 13:31:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:13:44.521 [2024-05-15 13:31:57.380223] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:44.521 [2024-05-15 13:31:57.381139] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77792 ] 00:13:44.522 [2024-05-15 13:31:57.503089] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:44.522 [2024-05-15 13:31:57.519760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:44.522 [2024-05-15 13:31:57.615076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.522 [2024-05-15 13:31:57.615216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.522 [2024-05-15 13:31:57.615223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:44.779 00:13:44.779 00:13:44.779 CUnit - A unit testing framework for C - Version 2.1-3 00:13:44.779 http://cunit.sourceforge.net/ 00:13:44.779 00:13:44.779 00:13:44.779 Suite: accel_dif 00:13:44.779 Test: verify: DIF generated, GUARD check ...passed 00:13:44.779 Test: verify: DIF generated, APPTAG check ...passed 00:13:44.779 Test: verify: DIF generated, REFTAG check ...passed 00:13:44.779 Test: verify: DIF not generated, GUARD check ...[2024-05-15 13:31:57.701827] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:44.779 passed 00:13:44.779 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 13:31:57.702150] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:13:44.779 [2024-05-15 13:31:57.702262] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:44.779 passed 00:13:44.779 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 13:31:57.702391] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:13:44.779 [2024-05-15 13:31:57.702534] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:44.779 [2024-05-15 13:31:57.702664] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:13:44.779 passed 00:13:44.779 Test: verify: APPTAG correct, APPTAG check ...passed 00:13:44.779 Test: verify: APPTAG incorrect, APPTAG check 
...[2024-05-15 13:31:57.702778] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:13:44.779 passed 00:13:44.779 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:13:44.779 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:13:44.779 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:13:44.779 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:13:44.779 Test: generate copy: DIF generated, GUARD check ...[2024-05-15 13:31:57.703019] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:13:44.779 passed 00:13:44.779 Test: generate copy: DIF generated, APTTAG check ...passed 00:13:44.779 Test: generate copy: DIF generated, REFTAG check ...passed 00:13:44.779 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:13:44.779 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:13:44.779 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:13:44.779 Test: generate copy: iovecs-len validate ...[2024-05-15 13:31:57.703462] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:13:44.779 passed 00:13:44.779 Test: generate copy: buffer alignment validate ...passed 00:13:44.779 00:13:44.779 Run Summary: Type Total Ran Passed Failed Inactive 00:13:44.779 suites 1 1 n/a 0 0 00:13:44.779 tests 20 20 20 0 0 00:13:44.779 asserts 204 204 204 0 n/a 00:13:44.779 00:13:44.779 Elapsed time = 0.006 seconds 00:13:45.037 00:13:45.037 real 0m0.560s 00:13:45.037 user 0m0.733s 00:13:45.037 sys 0m0.160s 00:13:45.037 13:31:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:45.037 13:31:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:13:45.037 ************************************ 00:13:45.037 END TEST accel_dif_functional_tests 00:13:45.037 ************************************ 00:13:45.037 00:13:45.037 real 0m33.824s 00:13:45.037 user 0m35.587s 00:13:45.037 sys 0m3.879s 00:13:45.037 13:31:57 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:45.037 13:31:57 accel -- common/autotest_common.sh@10 -- # set +x 00:13:45.037 ************************************ 00:13:45.037 END TEST accel 00:13:45.037 ************************************ 00:13:45.037 13:31:57 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:45.037 13:31:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:45.037 13:31:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:45.037 13:31:57 -- common/autotest_common.sh@10 -- # set +x 00:13:45.037 ************************************ 00:13:45.037 START TEST accel_rpc 00:13:45.037 ************************************ 00:13:45.037 13:31:57 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:13:45.037 * Looking for test storage... 
00:13:45.037 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:13:45.037 13:31:58 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:13:45.037 13:31:58 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=77862 00:13:45.037 13:31:58 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:13:45.037 13:31:58 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 77862 00:13:45.037 13:31:58 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 77862 ']' 00:13:45.037 13:31:58 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.037 13:31:58 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:45.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.037 13:31:58 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.037 13:31:58 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:45.037 13:31:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.037 [2024-05-15 13:31:58.128095] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:45.037 [2024-05-15 13:31:58.128202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77862 ] 00:13:45.293 [2024-05-15 13:31:58.249113] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:45.293 [2024-05-15 13:31:58.264874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.293 [2024-05-15 13:31:58.344768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.230 13:31:59 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:46.230 13:31:59 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:13:46.230 13:31:59 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:13:46.230 13:31:59 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:13:46.230 13:31:59 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:13:46.230 13:31:59 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:13:46.230 13:31:59 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:13:46.230 13:31:59 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:46.230 13:31:59 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:46.230 13:31:59 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:46.230 ************************************ 00:13:46.230 START TEST accel_assign_opcode 00:13:46.230 ************************************ 00:13:46.230 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:13:46.230 13:31:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:13:46.230 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.230 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:46.230 [2024-05-15 13:31:59.137280] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:13:46.230 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.230 13:31:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:13:46.230 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.230 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:46.230 [2024-05-15 13:31:59.145272] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:13:46.230 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.230 13:31:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:13:46.230 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.230 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:46.488 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.488 13:31:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:13:46.488 13:31:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:13:46.488 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.488 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:46.488 13:31:59 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:13:46.488 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.488 software 00:13:46.488 00:13:46.488 real 0m0.295s 
00:13:46.488 user 0m0.051s 00:13:46.488 sys 0m0.014s 00:13:46.488 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:46.488 13:31:59 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:13:46.488 ************************************ 00:13:46.488 END TEST accel_assign_opcode 00:13:46.488 ************************************ 00:13:46.488 13:31:59 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 77862 00:13:46.488 13:31:59 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 77862 ']' 00:13:46.488 13:31:59 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 77862 00:13:46.488 13:31:59 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:13:46.488 13:31:59 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:46.488 13:31:59 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77862 00:13:46.488 13:31:59 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:46.488 13:31:59 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:46.488 killing process with pid 77862 00:13:46.488 13:31:59 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77862' 00:13:46.488 13:31:59 accel_rpc -- common/autotest_common.sh@965 -- # kill 77862 00:13:46.488 13:31:59 accel_rpc -- common/autotest_common.sh@970 -- # wait 77862 00:13:47.052 00:13:47.052 real 0m1.877s 00:13:47.052 user 0m1.991s 00:13:47.052 sys 0m0.453s 00:13:47.052 13:31:59 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:47.052 ************************************ 00:13:47.052 END TEST accel_rpc 00:13:47.052 ************************************ 00:13:47.052 13:31:59 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.052 13:31:59 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:47.052 13:31:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:47.052 13:31:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:47.052 13:31:59 -- common/autotest_common.sh@10 -- # set +x 00:13:47.052 ************************************ 00:13:47.052 START TEST app_cmdline 00:13:47.052 ************************************ 00:13:47.052 13:31:59 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:13:47.052 * Looking for test storage... 00:13:47.052 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:47.052 13:31:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:13:47.052 13:31:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=77973 00:13:47.052 13:31:59 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:13:47.052 13:31:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 77973 00:13:47.052 13:31:59 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 77973 ']' 00:13:47.052 13:31:59 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.052 13:31:59 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:47.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.052 13:31:59 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
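The accel_rpc test whose output ends just above exercises opcode-to-module assignment over JSON-RPC against a spdk_tgt started with --wait-for-rpc. Condensed from the rpc_cmd calls visible in that trace, the flow is roughly the following; only the RPC invocations themselves are taken from the log, the shell glue around them is a sketch:

    # Sketch of the accel_rpc flow traced above (paths as used in this job).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m incorrect    # accepted pre-init even for a bogus module name
    $rpc accel_assign_opc -o copy -m software     # reassign the copy opcode to the software module
    $rpc framework_start_init
    $rpc accel_get_opc_assignments | jq -r .copy  # the test greps this output for "software"
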
00:13:47.052 13:31:59 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:47.052 13:31:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:47.052 [2024-05-15 13:32:00.040798] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:13:47.052 [2024-05-15 13:32:00.040911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77973 ] 00:13:47.309 [2024-05-15 13:32:00.162446] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:47.309 [2024-05-15 13:32:00.182731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.309 [2024-05-15 13:32:00.274177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.241 13:32:01 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:48.241 13:32:01 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:13:48.241 13:32:01 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:13:48.241 { 00:13:48.241 "fields": { 00:13:48.241 "commit": "253cca4fc", 00:13:48.241 "major": 24, 00:13:48.241 "minor": 5, 00:13:48.241 "patch": 0, 00:13:48.241 "suffix": "-pre" 00:13:48.241 }, 00:13:48.241 "version": "SPDK v24.05-pre git sha1 253cca4fc" 00:13:48.241 } 00:13:48.241 13:32:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:13:48.241 13:32:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:13:48.241 13:32:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:13:48.241 13:32:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:13:48.241 13:32:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:13:48.241 13:32:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:13:48.241 13:32:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:13:48.241 13:32:01 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.241 13:32:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:48.498 13:32:01 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.498 13:32:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:13:48.498 13:32:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:13:48.498 13:32:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:48.498 13:32:01 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:13:48.498 13:32:01 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:48.498 13:32:01 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.498 13:32:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:48.498 13:32:01 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.498 13:32:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:48.498 13:32:01 app_cmdline -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.498 13:32:01 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:48.498 13:32:01 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:48.498 13:32:01 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:48.498 13:32:01 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:13:48.755 2024/05/15 13:32:01 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:13:48.755 request: 00:13:48.755 { 00:13:48.755 "method": "env_dpdk_get_mem_stats", 00:13:48.755 "params": {} 00:13:48.755 } 00:13:48.755 Got JSON-RPC error response 00:13:48.755 GoRPCClient: error on JSON-RPC call 00:13:48.755 13:32:01 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:13:48.755 13:32:01 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:48.755 13:32:01 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:48.755 13:32:01 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:48.755 13:32:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 77973 00:13:48.755 13:32:01 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 77973 ']' 00:13:48.755 13:32:01 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 77973 00:13:48.755 13:32:01 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:13:48.755 13:32:01 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:48.755 13:32:01 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77973 00:13:48.755 13:32:01 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:48.755 13:32:01 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:48.755 killing process with pid 77973 00:13:48.755 13:32:01 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77973' 00:13:48.755 13:32:01 app_cmdline -- common/autotest_common.sh@965 -- # kill 77973 00:13:48.755 13:32:01 app_cmdline -- common/autotest_common.sh@970 -- # wait 77973 00:13:49.011 00:13:49.011 real 0m2.165s 00:13:49.011 user 0m2.757s 00:13:49.011 sys 0m0.481s 00:13:49.011 13:32:02 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:49.011 13:32:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:13:49.011 ************************************ 00:13:49.011 END TEST app_cmdline 00:13:49.011 ************************************ 00:13:49.011 13:32:02 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:49.011 13:32:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:49.011 13:32:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:49.011 13:32:02 -- common/autotest_common.sh@10 -- # set +x 00:13:49.011 ************************************ 00:13:49.011 START TEST version 00:13:49.011 ************************************ 00:13:49.011 13:32:02 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:13:49.270 * Looking for test storage... 
00:13:49.270 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:13:49.270 13:32:02 version -- app/version.sh@17 -- # get_header_version major 00:13:49.270 13:32:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:49.270 13:32:02 version -- app/version.sh@14 -- # cut -f2 00:13:49.270 13:32:02 version -- app/version.sh@14 -- # tr -d '"' 00:13:49.270 13:32:02 version -- app/version.sh@17 -- # major=24 00:13:49.270 13:32:02 version -- app/version.sh@18 -- # get_header_version minor 00:13:49.270 13:32:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:49.270 13:32:02 version -- app/version.sh@14 -- # cut -f2 00:13:49.270 13:32:02 version -- app/version.sh@14 -- # tr -d '"' 00:13:49.270 13:32:02 version -- app/version.sh@18 -- # minor=5 00:13:49.270 13:32:02 version -- app/version.sh@19 -- # get_header_version patch 00:13:49.270 13:32:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:49.270 13:32:02 version -- app/version.sh@14 -- # cut -f2 00:13:49.270 13:32:02 version -- app/version.sh@14 -- # tr -d '"' 00:13:49.270 13:32:02 version -- app/version.sh@19 -- # patch=0 00:13:49.270 13:32:02 version -- app/version.sh@20 -- # get_header_version suffix 00:13:49.270 13:32:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:13:49.270 13:32:02 version -- app/version.sh@14 -- # cut -f2 00:13:49.270 13:32:02 version -- app/version.sh@14 -- # tr -d '"' 00:13:49.270 13:32:02 version -- app/version.sh@20 -- # suffix=-pre 00:13:49.270 13:32:02 version -- app/version.sh@22 -- # version=24.5 00:13:49.270 13:32:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:13:49.270 13:32:02 version -- app/version.sh@28 -- # version=24.5rc0 00:13:49.270 13:32:02 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:49.270 13:32:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:13:49.270 13:32:02 version -- app/version.sh@30 -- # py_version=24.5rc0 00:13:49.270 13:32:02 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:13:49.270 ************************************ 00:13:49.270 END TEST version 00:13:49.270 ************************************ 00:13:49.270 00:13:49.270 real 0m0.147s 00:13:49.270 user 0m0.087s 00:13:49.270 sys 0m0.093s 00:13:49.270 13:32:02 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:49.270 13:32:02 version -- common/autotest_common.sh@10 -- # set +x 00:13:49.270 13:32:02 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:13:49.270 13:32:02 -- spdk/autotest.sh@194 -- # uname -s 00:13:49.270 13:32:02 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:13:49.270 13:32:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:49.270 13:32:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:13:49.270 13:32:02 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:13:49.270 13:32:02 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:13:49.270 13:32:02 -- spdk/autotest.sh@256 -- # timing_exit lib 00:13:49.270 13:32:02 -- common/autotest_common.sh@726 -- # xtrace_disable 
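The version test above derives the version string by grepping the SPDK_VERSION_* macros out of include/spdk/version.h and comparing the result with the installed Python package. The same extraction, condensed into a few lines (header path taken from this workspace; the -pre to rc0 mapping done by version.sh is only noted here, not reimplemented):
  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+'  "$hdr" | cut -f2 | tr -d '"')
  suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
  echo "${major}.${minor}${suffix}"                    # 24.5-pre here; version.sh maps -pre to rc0
  python3 -c 'import spdk; print(spdk.__version__)'    # 24.5rc0, as reported in the trace above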
00:13:49.270 13:32:02 -- common/autotest_common.sh@10 -- # set +x 00:13:49.270 13:32:02 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:13:49.270 13:32:02 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:13:49.270 13:32:02 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:13:49.270 13:32:02 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:13:49.270 13:32:02 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:13:49.270 13:32:02 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:13:49.270 13:32:02 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:49.270 13:32:02 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:49.270 13:32:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:49.270 13:32:02 -- common/autotest_common.sh@10 -- # set +x 00:13:49.270 ************************************ 00:13:49.270 START TEST nvmf_tcp 00:13:49.270 ************************************ 00:13:49.270 13:32:02 nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:13:49.528 * Looking for test storage... 00:13:49.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:49.528 13:32:02 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.528 13:32:02 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.528 13:32:02 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.528 13:32:02 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.528 13:32:02 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.528 13:32:02 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.528 13:32:02 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:13:49.528 13:32:02 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:13:49.528 13:32:02 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:49.528 13:32:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:13:49.528 13:32:02 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:49.528 13:32:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:49.528 13:32:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:49.528 13:32:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.528 ************************************ 00:13:49.528 START TEST nvmf_example 00:13:49.528 ************************************ 00:13:49.528 13:32:02 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:49.528 * Looking for test storage... 00:13:49.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:49.528 Cannot find device "nvmf_init_br" 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:13:49.528 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:49.786 Cannot find device "nvmf_tgt_br" 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:49.786 Cannot find device "nvmf_tgt_br2" 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:49.786 Cannot find device "nvmf_init_br" 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:49.786 Cannot find device "nvmf_tgt_br" 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:49.786 Cannot find device "nvmf_tgt_br2" 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:49.786 Cannot find device "nvmf_br" 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:49.786 Cannot find device "nvmf_init_if" 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:49.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:49.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:49.786 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:49.787 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:49.787 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:49.787 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:49.787 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:49.787 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:49.787 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:49.787 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:49.787 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:49.787 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:49.787 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:49.787 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 
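nvmf_veth_init, traced around this point, builds the test network from a network namespace plus veth pairs: the initiator side keeps 10.0.0.1 while the target interfaces (10.0.0.2 and 10.0.0.3) are moved into nvmf_tgt_ns_spdk. A condensed sketch of the same topology for a single target interface (names follow the log; the second veth pair and the error-tolerant teardown at the start are omitted):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays in the host
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # the nvmf_br bridge and the iptables ACCEPT rules that join the two halves follow in the next step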
00:13:49.787 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:50.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:13:50.045 00:13:50.045 --- 10.0.0.2 ping statistics --- 00:13:50.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.045 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:50.045 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:50.045 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:13:50.045 00:13:50.045 --- 10.0.0.3 ping statistics --- 00:13:50.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.045 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:50.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:13:50.045 00:13:50.045 --- 10.0.0.1 ping statistics --- 00:13:50.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.045 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:50.045 13:32:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:50.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=78335 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 78335 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 78335 ']' 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:50.045 13:32:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:51.417 13:32:04 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:13:51.417 13:32:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:01.385 Initializing NVMe Controllers 00:14:01.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:01.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:01.385 Initialization complete. Launching workers. 00:14:01.385 ======================================================== 00:14:01.385 Latency(us) 00:14:01.385 Device Information : IOPS MiB/s Average min max 00:14:01.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15297.92 59.76 4183.44 751.12 21218.50 00:14:01.385 ======================================================== 00:14:01.385 Total : 15297.92 59.76 4183.44 751.12 21218.50 00:14:01.385 00:14:01.385 13:32:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:14:01.385 13:32:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:14:01.385 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:01.385 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:01.643 rmmod nvme_tcp 00:14:01.643 rmmod nvme_fabrics 00:14:01.643 rmmod nvme_keyring 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 78335 ']' 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 78335 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 78335 ']' 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 78335 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 
78335 00:14:01.643 killing process with pid 78335 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78335' 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 78335 00:14:01.643 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 78335 00:14:01.901 nvmf threads initialize successfully 00:14:01.901 bdev subsystem init successfully 00:14:01.901 created a nvmf target service 00:14:01.901 create targets's poll groups done 00:14:01.901 all subsystems of target started 00:14:01.901 nvmf target is running 00:14:01.901 all subsystems of target stopped 00:14:01.901 destroy targets's poll groups done 00:14:01.901 destroyed the nvmf target service 00:14:01.901 bdev subsystem finish successfully 00:14:01.901 nvmf threads destroy successfully 00:14:01.901 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:01.901 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:01.901 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:01.901 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:01.901 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:01.901 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.901 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.901 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.901 13:32:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:01.901 13:32:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:01.901 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:01.901 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:01.901 00:14:01.901 real 0m12.405s 00:14:01.901 user 0m44.610s 00:14:01.901 sys 0m1.942s 00:14:01.902 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:01.902 13:32:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:01.902 ************************************ 00:14:01.902 END TEST nvmf_example 00:14:01.902 ************************************ 00:14:01.902 13:32:14 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:01.902 13:32:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:01.902 13:32:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:01.902 13:32:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:01.902 ************************************ 00:14:01.902 START TEST nvmf_filesystem 00:14:01.902 ************************************ 00:14:01.902 13:32:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:01.902 * Looking for test storage... 
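The I/O numbers in the nvmf_example run above come from spdk_nvme_perf pointed at the namespace's TCP listener. The invocation, copied from the trace, can be repeated standalone against a target configured as in the previous steps; -q 64 keeps 64 I/Os in flight, -o 4096 uses 4 KiB I/O, -M 30 sets the read share of the random mix to 30%, and -t 10 runs for ten seconds:
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'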
00:14:02.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:14:02.163 13:32:15 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:14:02.163 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # 
VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:02.164 #define SPDK_CONFIG_H 00:14:02.164 #define SPDK_CONFIG_APPS 1 00:14:02.164 #define SPDK_CONFIG_ARCH native 00:14:02.164 #undef SPDK_CONFIG_ASAN 00:14:02.164 #define SPDK_CONFIG_AVAHI 1 00:14:02.164 #undef SPDK_CONFIG_CET 00:14:02.164 #define SPDK_CONFIG_COVERAGE 1 00:14:02.164 #define SPDK_CONFIG_CROSS_PREFIX 00:14:02.164 #undef SPDK_CONFIG_CRYPTO 00:14:02.164 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:02.164 #undef SPDK_CONFIG_CUSTOMOCF 00:14:02.164 #undef SPDK_CONFIG_DAOS 00:14:02.164 #define SPDK_CONFIG_DAOS_DIR 00:14:02.164 #define SPDK_CONFIG_DEBUG 1 00:14:02.164 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:02.164 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:14:02.164 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:14:02.164 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:14:02.164 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:02.164 #undef SPDK_CONFIG_DPDK_UADK 00:14:02.164 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:02.164 #define SPDK_CONFIG_EXAMPLES 1 00:14:02.164 #undef SPDK_CONFIG_FC 00:14:02.164 #define SPDK_CONFIG_FC_PATH 00:14:02.164 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:02.164 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:02.164 #undef SPDK_CONFIG_FUSE 00:14:02.164 #undef SPDK_CONFIG_FUZZER 00:14:02.164 #define SPDK_CONFIG_FUZZER_LIB 00:14:02.164 #define SPDK_CONFIG_GOLANG 1 00:14:02.164 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:02.164 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:02.164 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:02.164 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:14:02.164 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:02.164 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:02.164 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:02.164 #define SPDK_CONFIG_IDXD 1 00:14:02.164 #undef SPDK_CONFIG_IDXD_KERNEL 00:14:02.164 #undef SPDK_CONFIG_IPSEC_MB 00:14:02.164 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:02.164 #define SPDK_CONFIG_ISAL 1 00:14:02.164 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:02.164 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:02.164 #define SPDK_CONFIG_LIBDIR 00:14:02.164 #undef SPDK_CONFIG_LTO 00:14:02.164 #define SPDK_CONFIG_MAX_LCORES 00:14:02.164 #define SPDK_CONFIG_NVME_CUSE 1 00:14:02.164 #undef SPDK_CONFIG_OCF 00:14:02.164 #define SPDK_CONFIG_OCF_PATH 00:14:02.164 #define SPDK_CONFIG_OPENSSL_PATH 00:14:02.164 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:02.164 #define SPDK_CONFIG_PGO_DIR 00:14:02.164 #undef SPDK_CONFIG_PGO_USE 00:14:02.164 #define SPDK_CONFIG_PREFIX /usr/local 00:14:02.164 #undef SPDK_CONFIG_RAID5F 00:14:02.164 #undef SPDK_CONFIG_RBD 00:14:02.164 #define SPDK_CONFIG_RDMA 1 00:14:02.164 #define 
SPDK_CONFIG_RDMA_PROV verbs 00:14:02.164 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:02.164 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:02.164 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:02.164 #define SPDK_CONFIG_SHARED 1 00:14:02.164 #undef SPDK_CONFIG_SMA 00:14:02.164 #define SPDK_CONFIG_TESTS 1 00:14:02.164 #undef SPDK_CONFIG_TSAN 00:14:02.164 #define SPDK_CONFIG_UBLK 1 00:14:02.164 #define SPDK_CONFIG_UBSAN 1 00:14:02.164 #undef SPDK_CONFIG_UNIT_TESTS 00:14:02.164 #undef SPDK_CONFIG_URING 00:14:02.164 #define SPDK_CONFIG_URING_PATH 00:14:02.164 #undef SPDK_CONFIG_URING_ZNS 00:14:02.164 #define SPDK_CONFIG_USDT 1 00:14:02.164 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:02.164 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:02.164 #undef SPDK_CONFIG_VFIO_USER 00:14:02.164 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:02.164 #define SPDK_CONFIG_VHOST 1 00:14:02.164 #define SPDK_CONFIG_VIRTIO 1 00:14:02.164 #undef SPDK_CONFIG_VTUNE 00:14:02.164 #define SPDK_CONFIG_VTUNE_DIR 00:14:02.164 #define SPDK_CONFIG_WERROR 1 00:14:02.164 #define SPDK_CONFIG_WPDK_DIR 00:14:02.164 #undef SPDK_CONFIG_XNVME 00:14:02.164 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:02.164 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:14:02.165 13:32:15 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : /home/vagrant/spdk_repo/dpdk/build 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:14:02.165 13:32:15 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : main 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 1 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 
00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 1 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 1 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:02.165 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:02.166 
13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:02.166 13:32:15 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 78585 ]] 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 78585 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local 
requested_size=2147483648 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.4Lngrv 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.4Lngrv/tests/target /tmp/spdk.4Lngrv 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=devtmpfs 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=4194304 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=4194304 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6264512512 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267887616 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=2494353408 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=2507157504 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@362 -- # uses["$mount"]=12804096 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:14:02.166 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12019802112 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5949022208 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12019802112 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5949022208 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda2 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext4 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=843546624 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1012768768 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=100016128 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6267752448 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267891712 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=139264 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda3 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=vfat 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=92499968 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=104607744 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12107776 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use 
avail _ mount 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=1253572608 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253576704 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=92019601408 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=7683178496 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:14:02.167 * Looking for test storage... 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/home 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=12019802112 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == tmpfs ]] 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == ramfs ]] 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ /home == / ]] 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:02.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 
00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.167 13:32:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.168 13:32:15 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:02.168 Cannot find device 
"nvmf_tgt_br" 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:02.168 Cannot find device "nvmf_tgt_br2" 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:02.168 Cannot find device "nvmf_tgt_br" 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:02.168 Cannot find device "nvmf_tgt_br2" 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:14:02.168 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:02.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:02.426 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:02.426 13:32:15 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:02.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:14:02.426 00:14:02.426 --- 10.0.0.2 ping statistics --- 00:14:02.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.426 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:02.426 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:02.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:14:02.426 00:14:02.426 --- 10.0.0.3 ping statistics --- 00:14:02.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.426 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:02.426 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:02.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:02.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:14:02.426 00:14:02.426 --- 10.0.0.1 ping statistics --- 00:14:02.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.426 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:02.684 ************************************ 00:14:02.684 START TEST nvmf_filesystem_no_in_capsule 00:14:02.684 ************************************ 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:14:02.684 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:02.685 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.685 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:02.685 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:02.685 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=78737 00:14:02.685 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 78737 00:14:02.685 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:02.685 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 78737 ']' 00:14:02.685 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.685 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:02.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:02.685 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.685 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:02.685 13:32:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:02.685 [2024-05-15 13:32:15.624133] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:02.685 [2024-05-15 13:32:15.624223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.685 [2024-05-15 13:32:15.751854] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:02.685 [2024-05-15 13:32:15.772109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.943 [2024-05-15 13:32:15.886161] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.943 [2024-05-15 13:32:15.886230] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.943 [2024-05-15 13:32:15.886254] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.943 [2024-05-15 13:32:15.886265] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.943 [2024-05-15 13:32:15.886274] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
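For orientation, the network and target bring-up that nvmf/common.sh performs in the trace above condenses to roughly the shell sequence below. This is a sketch, not the harness verbatim: interface names, addresses, and the nvmf_tgt path are taken from the log, the nvmf_tgt_ns_spdk namespace and its veth pairs are created a few steps earlier in common.sh and are not visible in this excerpt, and waitforlisten is the harness helper that polls the RPC socket.

    # Condensed sketch of the bring-up traced above (names and addresses from the log).
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # host -> target addresses
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target -> host address
    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # harness helper; waits for /var/tmp/spdk.sock to accept RPCs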
00:14:02.943 [2024-05-15 13:32:15.886452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.943 [2024-05-15 13:32:15.886587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.943 [2024-05-15 13:32:15.887152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.943 [2024-05-15 13:32:15.887210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:03.878 [2024-05-15 13:32:16.708543] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:03.878 Malloc1 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:03.878 [2024-05-15 13:32:16.897403] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:03.878 [2024-05-15 13:32:16.897744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:14:03.878 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:14:03.879 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:03.879 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.879 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:03.879 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.879 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:14:03.879 { 00:14:03.879 "aliases": [ 00:14:03.879 "16e4775c-e9c6-48f1-a0bb-3671fedf77b9" 00:14:03.879 ], 00:14:03.879 "assigned_rate_limits": { 00:14:03.879 "r_mbytes_per_sec": 0, 00:14:03.879 "rw_ios_per_sec": 0, 00:14:03.879 "rw_mbytes_per_sec": 0, 00:14:03.879 "w_mbytes_per_sec": 0 00:14:03.879 }, 00:14:03.879 "block_size": 512, 00:14:03.879 "claim_type": "exclusive_write", 00:14:03.879 "claimed": true, 00:14:03.879 "driver_specific": {}, 00:14:03.879 "memory_domains": [ 00:14:03.879 { 00:14:03.879 "dma_device_id": "system", 00:14:03.879 "dma_device_type": 1 00:14:03.879 }, 00:14:03.879 { 00:14:03.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.879 "dma_device_type": 2 00:14:03.879 } 00:14:03.879 ], 00:14:03.879 "name": "Malloc1", 00:14:03.879 "num_blocks": 1048576, 00:14:03.879 "product_name": "Malloc disk", 00:14:03.879 "supported_io_types": { 00:14:03.879 "abort": true, 00:14:03.879 "compare": false, 00:14:03.879 "compare_and_write": false, 00:14:03.879 "flush": true, 00:14:03.879 "nvme_admin": false, 00:14:03.879 "nvme_io": false, 00:14:03.879 "read": true, 00:14:03.879 "reset": true, 00:14:03.879 
"unmap": true, 00:14:03.879 "write": true, 00:14:03.879 "write_zeroes": true 00:14:03.879 }, 00:14:03.879 "uuid": "16e4775c-e9c6-48f1-a0bb-3671fedf77b9", 00:14:03.879 "zoned": false 00:14:03.879 } 00:14:03.879 ]' 00:14:03.879 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:14:04.138 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:14:04.138 13:32:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:14:04.138 13:32:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:14:04.138 13:32:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:14:04.138 13:32:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:14:04.138 13:32:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:04.138 13:32:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.138 13:32:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:04.138 13:32:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:14:04.138 13:32:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:04.138 13:32:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:04.138 13:32:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:06.676 13:32:19 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:06.676 13:32:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:07.645 ************************************ 00:14:07.645 START TEST filesystem_ext4 00:14:07.645 ************************************ 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:14:07.645 13:32:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:07.645 mke2fs 1.46.5 (30-Dec-2021) 00:14:07.645 Discarding device blocks: 0/522240 done 00:14:07.645 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:07.645 Filesystem UUID: fe573808-845f-4e44-bbcc-22fcdda22f8e 00:14:07.645 Superblock backups stored on blocks: 00:14:07.645 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:07.645 00:14:07.645 Allocating group tables: 0/64 done 00:14:07.645 Writing inode tables: 0/64 done 00:14:07.645 Creating journal (8192 blocks): done 00:14:07.645 Writing superblocks and filesystem accounting information: 0/64 done 00:14:07.645 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:07.645 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 78737 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:07.904 00:14:07.904 real 0m0.399s 00:14:07.904 user 0m0.027s 00:14:07.904 sys 0m0.055s 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:07.904 ************************************ 00:14:07.904 END TEST filesystem_ext4 00:14:07.904 ************************************ 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:07.904 13:32:20 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:07.904 ************************************ 00:14:07.904 START TEST filesystem_btrfs 00:14:07.904 ************************************ 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:07.904 btrfs-progs v6.6.2 00:14:07.904 See https://btrfs.readthedocs.io for more information. 00:14:07.904 00:14:07.904 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:14:07.904 NOTE: several default settings have changed in version 5.15, please make sure 00:14:07.904 this does not affect your deployments: 00:14:07.904 - DUP for metadata (-m dup) 00:14:07.904 - enabled no-holes (-O no-holes) 00:14:07.904 - enabled free-space-tree (-R free-space-tree) 00:14:07.904 00:14:07.904 Label: (null) 00:14:07.904 UUID: 53d44933-0f3a-471d-9b12-0d9c4fb3bcb0 00:14:07.904 Node size: 16384 00:14:07.904 Sector size: 4096 00:14:07.904 Filesystem size: 510.00MiB 00:14:07.904 Block group profiles: 00:14:07.904 Data: single 8.00MiB 00:14:07.904 Metadata: DUP 32.00MiB 00:14:07.904 System: DUP 8.00MiB 00:14:07.904 SSD detected: yes 00:14:07.904 Zoned device: no 00:14:07.904 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:07.904 Runtime features: free-space-tree 00:14:07.904 Checksum: crc32c 00:14:07.904 Number of devices: 1 00:14:07.904 Devices: 00:14:07.904 ID SIZE PATH 00:14:07.904 1 510.00MiB /dev/nvme0n1p1 00:14:07.904 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:14:07.904 13:32:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 78737 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:08.162 00:14:08.162 real 0m0.223s 00:14:08.162 user 0m0.021s 00:14:08.162 sys 0m0.056s 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:08.162 ************************************ 00:14:08.162 END TEST filesystem_btrfs 00:14:08.162 ************************************ 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:08.162 13:32:21 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:08.162 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:08.162 ************************************ 00:14:08.162 START TEST filesystem_xfs 00:14:08.162 ************************************ 00:14:08.163 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:14:08.163 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:08.163 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:08.163 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:08.163 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:14:08.163 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:14:08.163 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:14:08.163 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:14:08.163 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:14:08.163 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:14:08.163 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:08.163 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:08.163 = sectsz=512 attr=2, projid32bit=1 00:14:08.163 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:08.163 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:08.163 data = bsize=4096 blocks=130560, imaxpct=25 00:14:08.163 = sunit=0 swidth=0 blks 00:14:08.163 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:08.163 log =internal log bsize=4096 blocks=16384, version=2 00:14:08.163 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:08.163 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:09.099 Discarding blocks...Done. 
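The provisioning that precedes these filesystem runs is easier to follow outside the xtrace noise. A rough equivalent of the rpc_cmd calls and the host-side steps traced above is sketched below; rpc_cmd in the harness forwards to SPDK's scripts/rpc.py (default socket /var/tmp/spdk.sock), the -c 0 transport option is what makes this the no-in-capsule variant, and the $rpc shorthand plus the size comments are illustrative.

    # Target side: TCP transport, a 512 MiB malloc bdev, and a subsystem exporting it.
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0           # -c 0: no in-capsule data
    $rpc bdev_malloc_create 512 512 -b Malloc1                  # 512 MiB, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: connect, locate the namespace by serial, partition it, exercise a filesystem.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # harness also passes --hostnqn/--hostid
    dev=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
    mkfs.xfs -f "/dev/${dev}p1"                                 # ext4 and btrfs passes use mkfs.ext4 -F / mkfs.btrfs -f
    mount "/dev/${dev}p1" /mnt/device
    touch /mnt/device/aaa && sync && rm /mnt/device/aaa && sync
    umount /mnt/device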
00:14:09.099 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:14:09.099 13:32:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 78737 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:11.636 00:14:11.636 real 0m3.229s 00:14:11.636 user 0m0.025s 00:14:11.636 sys 0m0.054s 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:11.636 ************************************ 00:14:11.636 END TEST filesystem_xfs 00:14:11.636 ************************************ 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:11.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:11.636 
13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 78737 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 78737 ']' 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 78737 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78737 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:11.636 killing process with pid 78737 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78737' 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 78737 00:14:11.636 [2024-05-15 13:32:24.546504] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:11.636 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 78737 00:14:11.894 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:11.894 00:14:11.894 real 0m9.410s 00:14:11.894 user 0m35.500s 00:14:11.894 sys 0m1.642s 00:14:11.894 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:11.894 13:32:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:11.894 ************************************ 00:14:11.894 END TEST nvmf_filesystem_no_in_capsule 00:14:11.894 ************************************ 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
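That closes the no-in-capsule pass: the xfs mount check succeeds, the host disconnects, the subsystem is deleted, and nvmf_tgt (pid 78737) is stopped after about 9.4 s of wall time. The in-capsule pass that starts next repeats the same flow but creates the transport with -c 4096, so write payloads up to 4 KiB can ride inside the command capsule rather than as separate data PDUs. The teardown traced above condenses to roughly the following, reusing $rpc and $nvmfpid from the earlier sketches:

    # Host side first, then target; flock keeps parted from racing udev on the device.
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"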
00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:12.152 ************************************ 00:14:12.152 START TEST nvmf_filesystem_in_capsule 00:14:12.152 ************************************ 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=79054 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 79054 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 79054 ']' 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:12.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:12.152 13:32:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:12.152 [2024-05-15 13:32:25.074739] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:12.152 [2024-05-15 13:32:25.074832] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.152 [2024-05-15 13:32:25.198141] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:12.152 [2024-05-15 13:32:25.213227] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.410 [2024-05-15 13:32:25.316192] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.410 [2024-05-15 13:32:25.316248] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:12.410 [2024-05-15 13:32:25.316262] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.410 [2024-05-15 13:32:25.316273] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.410 [2024-05-15 13:32:25.316283] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.410 [2024-05-15 13:32:25.316384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.410 [2024-05-15 13:32:25.316877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.410 [2024-05-15 13:32:25.316971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.410 [2024-05-15 13:32:25.316973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.343 [2024-05-15 13:32:26.176179] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.343 Malloc1 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.343 13:32:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.343 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.343 [2024-05-15 13:32:26.372888] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:13.343 [2024-05-15 13:32:26.373318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.344 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.344 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:13.344 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:14:13.344 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:14:13.344 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:14:13.344 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:14:13.344 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:13.344 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.344 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:13.344 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.344 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:14:13.344 { 00:14:13.344 "aliases": [ 00:14:13.344 "d410cbf2-7820-4bd2-b21c-a84afc0640a6" 00:14:13.344 ], 00:14:13.344 "assigned_rate_limits": { 00:14:13.344 "r_mbytes_per_sec": 0, 00:14:13.344 "rw_ios_per_sec": 0, 00:14:13.344 "rw_mbytes_per_sec": 0, 00:14:13.344 "w_mbytes_per_sec": 0 00:14:13.344 }, 00:14:13.344 "block_size": 512, 00:14:13.344 "claim_type": "exclusive_write", 00:14:13.344 "claimed": true, 00:14:13.344 "driver_specific": {}, 00:14:13.344 "memory_domains": [ 00:14:13.344 { 00:14:13.344 "dma_device_id": "system", 00:14:13.344 "dma_device_type": 1 00:14:13.344 }, 00:14:13.344 { 00:14:13.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.344 "dma_device_type": 2 00:14:13.344 } 00:14:13.344 ], 00:14:13.344 "name": "Malloc1", 00:14:13.344 "num_blocks": 1048576, 00:14:13.344 "product_name": 
"Malloc disk", 00:14:13.344 "supported_io_types": { 00:14:13.344 "abort": true, 00:14:13.344 "compare": false, 00:14:13.344 "compare_and_write": false, 00:14:13.344 "flush": true, 00:14:13.344 "nvme_admin": false, 00:14:13.344 "nvme_io": false, 00:14:13.344 "read": true, 00:14:13.344 "reset": true, 00:14:13.344 "unmap": true, 00:14:13.344 "write": true, 00:14:13.344 "write_zeroes": true 00:14:13.344 }, 00:14:13.344 "uuid": "d410cbf2-7820-4bd2-b21c-a84afc0640a6", 00:14:13.344 "zoned": false 00:14:13.344 } 00:14:13.344 ]' 00:14:13.344 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:14:13.602 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:14:13.602 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:14:13.602 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:14:13.602 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:14:13.602 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:14:13.602 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:13.602 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:13.602 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:13.602 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:14:13.602 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:14:13.602 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:14:13.602 13:32:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:16.197 13:32:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:16.762 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:16.762 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:16.762 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:16.762 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:16.762 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:16.762 ************************************ 00:14:16.762 START TEST filesystem_in_capsule_ext4 00:14:16.762 ************************************ 00:14:16.762 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:16.762 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:16.762 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:16.762 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:16.762 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:14:16.762 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:14:16.762 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:14:16.763 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:14:16.763 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:14:16.763 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:14:16.763 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:16.763 mke2fs 1.46.5 (30-Dec-2021) 00:14:17.021 Discarding device blocks: 0/522240 done 00:14:17.021 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:17.021 Filesystem UUID: 6621ae73-488e-4ed6-b923-14b11dd30f22 00:14:17.021 Superblock backups stored on blocks: 00:14:17.021 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:17.021 00:14:17.021 Allocating group tables: 0/64 done 00:14:17.021 Writing inode tables: 0/64 done 00:14:17.021 Creating journal (8192 blocks): done 00:14:17.021 Writing superblocks and filesystem accounting information: 0/64 done 00:14:17.021 00:14:17.021 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:14:17.021 13:32:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:17.021 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:17.278 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:17.278 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:17.278 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:17.278 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:17.278 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:17.278 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 79054 00:14:17.278 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:17.278 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:17.278 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:17.278 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:17.278 ************************************ 00:14:17.278 END TEST filesystem_in_capsule_ext4 00:14:17.278 ************************************ 00:14:17.278 00:14:17.278 real 0m0.350s 00:14:17.278 user 0m0.029s 00:14:17.278 sys 0m0.047s 00:14:17.278 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:17.278 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:17.279 ************************************ 00:14:17.279 START TEST filesystem_in_capsule_btrfs 00:14:17.279 ************************************ 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:14:17.279 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:17.537 btrfs-progs v6.6.2 00:14:17.537 See https://btrfs.readthedocs.io for more information. 00:14:17.537 00:14:17.537 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:14:17.537 NOTE: several default settings have changed in version 5.15, please make sure 00:14:17.537 this does not affect your deployments: 00:14:17.537 - DUP for metadata (-m dup) 00:14:17.537 - enabled no-holes (-O no-holes) 00:14:17.537 - enabled free-space-tree (-R free-space-tree) 00:14:17.537 00:14:17.537 Label: (null) 00:14:17.537 UUID: 3cd2f451-212d-4eb7-b3a1-67159e15634c 00:14:17.537 Node size: 16384 00:14:17.537 Sector size: 4096 00:14:17.537 Filesystem size: 510.00MiB 00:14:17.537 Block group profiles: 00:14:17.537 Data: single 8.00MiB 00:14:17.537 Metadata: DUP 32.00MiB 00:14:17.537 System: DUP 8.00MiB 00:14:17.537 SSD detected: yes 00:14:17.537 Zoned device: no 00:14:17.537 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:17.537 Runtime features: free-space-tree 00:14:17.537 Checksum: crc32c 00:14:17.537 Number of devices: 1 00:14:17.537 Devices: 00:14:17.537 ID SIZE PATH 00:14:17.537 1 510.00MiB /dev/nvme0n1p1 00:14:17.537 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 79054 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:17.537 00:14:17.537 real 0m0.230s 00:14:17.537 user 0m0.026s 00:14:17.537 sys 0m0.060s 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:17.537 ************************************ 00:14:17.537 END TEST filesystem_in_capsule_btrfs 00:14:17.537 ************************************ 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:17.537 ************************************ 00:14:17.537 START TEST filesystem_in_capsule_xfs 00:14:17.537 ************************************ 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:14:17.537 13:32:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:17.537 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:17.537 = sectsz=512 attr=2, projid32bit=1 00:14:17.537 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:17.537 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:17.537 data = bsize=4096 blocks=130560, imaxpct=25 00:14:17.537 = sunit=0 swidth=0 blks 00:14:17.537 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:17.537 log =internal log bsize=4096 blocks=16384, version=2 00:14:17.537 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:17.537 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:18.469 Discarding blocks...Done. 
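The mkfs output above is produced inside the make_filesystem() helper from common/autotest_common.sh, whose xtrace lines (@922-@941) bracket each of the ext4/btrfs/xfs runs. A condensed sketch of that helper, reconstructed from the traced lines, follows; the retry bound and the sleep are assumptions, since only the entry checks and the final "return 0" are visible in the log.

make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force

    # ext4's mkfs refuses to overwrite an existing signature without -F;
    # btrfs and xfs take -f instead (matches the '[' ext4 = ext4 ']' branches above)
    if [[ $fstype == ext4 ]]; then
        force=-F
    else
        force=-f
    fi

    # retry a few times in case the freshly created partition is still settling
    # (the loop bound here is an assumption, not copied from the script)
    while ! mkfs."$fstype" $force "$dev_name"; do
        (( ++i >= 5 )) && return 1
        sleep 1
    done
    return 0
}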
00:14:18.469 13:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:14:18.469 13:32:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:20.371 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:20.371 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:20.371 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:20.371 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:20.371 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 79054 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:20.372 00:14:20.372 real 0m2.630s 00:14:20.372 user 0m0.024s 00:14:20.372 sys 0m0.051s 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:20.372 ************************************ 00:14:20.372 END TEST filesystem_in_capsule_xfs 00:14:20.372 ************************************ 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:20.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.372 13:32:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 79054 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 79054 ']' 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 79054 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79054 00:14:20.372 killing process with pid 79054 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79054' 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 79054 00:14:20.372 [2024-05-15 13:32:33.329377] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:20.372 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 79054 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:20.938 00:14:20.938 real 0m8.731s 00:14:20.938 user 0m33.044s 00:14:20.938 sys 0m1.552s 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:20.938 ************************************ 00:14:20.938 END TEST nvmf_filesystem_in_capsule 00:14:20.938 ************************************ 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 
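Each filesystem_in_capsule_* subtest above drives the same short cycle from target/filesystem.sh (@23-@43 in the trace): mount the freshly formatted partition, create and remove a file with syncs in between, unmount, then confirm the target process and the exported namespace are still healthy. A hedged reconstruction from those traced lines, with variable names mirroring the trace:

nvmf_filesystem_create() {
    local fstype=$1
    local nvme_name=$2

    make_filesystem "$fstype" "/dev/${nvme_name}p1"

    mount "/dev/${nvme_name}p1" /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    # the script retries this umount a few times (filesystem.sh@29-@36); compressed here
    umount /mnt/device

    # the nvmf target (pid 79054 in this run) must have survived the I/O
    kill -0 "$nvmfpid"

    # the namespace and its partition must still be visible to the initiator
    lsblk -l -o NAME | grep -q -w "$nvme_name"
    lsblk -l -o NAME | grep -q -w "${nvme_name}p1"
}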
00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.938 rmmod nvme_tcp 00:14:20.938 rmmod nvme_fabrics 00:14:20.938 rmmod nvme_keyring 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:20.938 00:14:20.938 real 0m18.994s 00:14:20.938 user 1m8.776s 00:14:20.938 sys 0m3.589s 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:20.938 ************************************ 00:14:20.938 END TEST nvmf_filesystem 00:14:20.938 ************************************ 00:14:20.938 13:32:33 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:20.938 13:32:33 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:20.938 13:32:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:20.938 13:32:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:20.938 13:32:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:20.938 ************************************ 00:14:20.938 START TEST nvmf_target_discovery 00:14:20.938 ************************************ 00:14:20.938 13:32:33 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:20.938 * Looking for test storage... 
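The rmmod lines above are the tail of nvmftestfini/nvmfcleanup in nvmf/common.sh: unload the initiator's kernel modules, kill the target if a pid is still recorded, tear down the SPDK network namespace, and flush the initiator interface. Roughly, with the retry and option handling simplified to what is visible in the trace, and the namespace removal assumed to be what remove_spdk_ns does:

nvmftestfini() {
    sync
    set +e
    # common.sh retries the unload up to 20 times; -v prints the dependent
    # rmmods (nvme_tcp, nvme_fabrics, nvme_keyring) seen in the log
    for _ in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e

    # nvmfpid was already cleared by filesystem.sh@102 in this run, so this is a no-op here
    if [[ -n ${nvmfpid:-} ]]; then
        killprocess "$nvmfpid"
    fi

    # remove_spdk_ns + address flush (namespace and interface names from the trace)
    ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
    ip -4 addr flush nvmf_init_if
}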
00:14:21.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.197 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:21.198 Cannot find device "nvmf_tgt_br" 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:21.198 Cannot find device "nvmf_tgt_br2" 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:21.198 Cannot find device "nvmf_tgt_br" 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:21.198 Cannot find device "nvmf_tgt_br2" 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:21.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:21.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:21.198 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:21.457 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:21.457 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:21.457 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:21.457 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:21.457 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:21.457 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:21.457 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:21.457 13:32:34 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:21.457 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:21.457 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:21.457 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:21.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:21.457 00:14:21.457 --- 10.0.0.2 ping statistics --- 00:14:21.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.457 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:21.457 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:21.457 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:21.457 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:14:21.457 00:14:21.457 --- 10.0.0.3 ping statistics --- 00:14:21.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.457 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:21.457 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:21.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:21.457 00:14:21.457 --- 10.0.0.1 ping statistics --- 00:14:21.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.457 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
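The ip/iptables commands traced above (nvmf_veth_init) build the virtual topology the rest of the run talks over: the target's interfaces live in the nvmf_tgt_ns_spdk namespace at 10.0.0.2 and 10.0.0.3, the initiator stays in the host namespace at 10.0.0.1, and the host-side veth ends are joined by the nvmf_br bridge. Stripped of the cleanup attempts and error output, the sequence is roughly:

ip netns add nvmf_tgt_ns_spdk

# one veth pair per endpoint; the *_br ends stay on the host for bridging
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up, then bridge the host-side ends together
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP (port 4420) in and let traffic hairpin across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # host -> target reachability, 0% loss as in the log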
00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=79507 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 79507 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 79507 ']' 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:21.458 13:32:34 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:21.458 [2024-05-15 13:32:34.469794] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:21.458 [2024-05-15 13:32:34.469903] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.716 [2024-05-15 13:32:34.594460] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:21.716 [2024-05-15 13:32:34.614267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:21.716 [2024-05-15 13:32:34.723182] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.716 [2024-05-15 13:32:34.723400] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.716 [2024-05-15 13:32:34.723672] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.716 [2024-05-15 13:32:34.723829] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.717 [2024-05-15 13:32:34.723965] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
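nvmfappstart, traced just above, launches the target binary inside that namespace and blocks until the RPC socket answers; the EAL/app notices that follow are its startup banner. A minimal stand-in for that launch is below; the polling loop approximates the waitforlisten() helper, which is not shown in full in this log.

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# -i 0: shared-memory id, -e 0xFFFF: all tracepoint groups, -m 0xF: cores 0-3
ip netns exec nvmf_tgt_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# wait for /var/tmp/spdk.sock to accept JSON-RPC (stand-in for waitforlisten 79507)
for _ in $(seq 1 100); do
    "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done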
00:14:21.717 [2024-05-15 13:32:34.725756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.717 [2024-05-15 13:32:34.725860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.717 [2024-05-15 13:32:34.725915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.717 [2024-05-15 13:32:34.725910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.652 [2024-05-15 13:32:35.555705] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.652 Null1 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.652 [2024-05-15 13:32:35.606255] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:22.652 [2024-05-15 13:32:35.606522] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.652 Null2 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.652 Null3 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery 
-- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:22.652 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.653 Null4 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.653 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -a 10.0.0.2 -s 4420 00:14:22.912 00:14:22.912 Discovery Log Number of Records 6, Generation counter 6 00:14:22.912 =====Discovery Log Entry 0====== 00:14:22.912 trtype: tcp 00:14:22.912 adrfam: ipv4 00:14:22.912 subtype: current discovery subsystem 00:14:22.912 treq: not required 00:14:22.912 portid: 0 00:14:22.912 trsvcid: 4420 00:14:22.912 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:22.912 traddr: 10.0.0.2 00:14:22.912 eflags: explicit discovery connections, duplicate discovery information 00:14:22.912 sectype: none 00:14:22.912 =====Discovery Log Entry 1====== 00:14:22.912 trtype: tcp 00:14:22.912 adrfam: ipv4 00:14:22.912 subtype: nvme subsystem 00:14:22.912 treq: not required 00:14:22.912 portid: 0 00:14:22.912 trsvcid: 4420 00:14:22.912 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:22.912 traddr: 10.0.0.2 00:14:22.912 eflags: none 00:14:22.912 sectype: none 00:14:22.912 =====Discovery Log Entry 2====== 00:14:22.912 trtype: tcp 00:14:22.912 adrfam: ipv4 00:14:22.912 subtype: nvme subsystem 00:14:22.912 treq: not required 00:14:22.912 portid: 0 00:14:22.912 trsvcid: 4420 00:14:22.912 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:22.912 traddr: 10.0.0.2 00:14:22.912 eflags: none 00:14:22.912 sectype: none 00:14:22.912 =====Discovery Log Entry 3====== 00:14:22.912 trtype: tcp 00:14:22.912 adrfam: ipv4 00:14:22.912 subtype: nvme subsystem 00:14:22.912 treq: not required 00:14:22.912 portid: 0 00:14:22.912 trsvcid: 4420 00:14:22.912 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:22.912 traddr: 10.0.0.2 00:14:22.912 eflags: none 00:14:22.912 sectype: none 00:14:22.912 =====Discovery Log Entry 4====== 00:14:22.912 trtype: tcp 00:14:22.912 adrfam: ipv4 00:14:22.912 subtype: nvme subsystem 00:14:22.912 treq: not required 00:14:22.912 portid: 0 00:14:22.912 trsvcid: 4420 00:14:22.912 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:22.912 traddr: 10.0.0.2 00:14:22.912 eflags: none 00:14:22.912 sectype: none 00:14:22.912 =====Discovery Log Entry 5====== 00:14:22.912 trtype: tcp 00:14:22.912 adrfam: ipv4 00:14:22.912 subtype: discovery subsystem referral 00:14:22.912 treq: not required 00:14:22.912 portid: 0 00:14:22.912 trsvcid: 4430 00:14:22.912 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:22.912 traddr: 10.0.0.2 00:14:22.912 eflags: none 00:14:22.912 sectype: none 00:14:22.912 Perform nvmf subsystem discovery via RPC 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.912 [ 00:14:22.912 { 00:14:22.912 "allow_any_host": true, 00:14:22.912 "hosts": [], 00:14:22.912 "listen_addresses": [ 00:14:22.912 { 00:14:22.912 "adrfam": "IPv4", 00:14:22.912 "traddr": "10.0.0.2", 00:14:22.912 "trsvcid": "4420", 00:14:22.912 "trtype": "TCP" 00:14:22.912 } 00:14:22.912 ], 00:14:22.912 
"nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:22.912 "subtype": "Discovery" 00:14:22.912 }, 00:14:22.912 { 00:14:22.912 "allow_any_host": true, 00:14:22.912 "hosts": [], 00:14:22.912 "listen_addresses": [ 00:14:22.912 { 00:14:22.912 "adrfam": "IPv4", 00:14:22.912 "traddr": "10.0.0.2", 00:14:22.912 "trsvcid": "4420", 00:14:22.912 "trtype": "TCP" 00:14:22.912 } 00:14:22.912 ], 00:14:22.912 "max_cntlid": 65519, 00:14:22.912 "max_namespaces": 32, 00:14:22.912 "min_cntlid": 1, 00:14:22.912 "model_number": "SPDK bdev Controller", 00:14:22.912 "namespaces": [ 00:14:22.912 { 00:14:22.912 "bdev_name": "Null1", 00:14:22.912 "name": "Null1", 00:14:22.912 "nguid": "143CEE2DD76F45FFA94DB1F6A38CF4E6", 00:14:22.912 "nsid": 1, 00:14:22.912 "uuid": "143cee2d-d76f-45ff-a94d-b1f6a38cf4e6" 00:14:22.912 } 00:14:22.912 ], 00:14:22.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:22.912 "serial_number": "SPDK00000000000001", 00:14:22.912 "subtype": "NVMe" 00:14:22.912 }, 00:14:22.912 { 00:14:22.912 "allow_any_host": true, 00:14:22.912 "hosts": [], 00:14:22.912 "listen_addresses": [ 00:14:22.912 { 00:14:22.912 "adrfam": "IPv4", 00:14:22.912 "traddr": "10.0.0.2", 00:14:22.912 "trsvcid": "4420", 00:14:22.912 "trtype": "TCP" 00:14:22.912 } 00:14:22.912 ], 00:14:22.912 "max_cntlid": 65519, 00:14:22.912 "max_namespaces": 32, 00:14:22.912 "min_cntlid": 1, 00:14:22.912 "model_number": "SPDK bdev Controller", 00:14:22.912 "namespaces": [ 00:14:22.912 { 00:14:22.912 "bdev_name": "Null2", 00:14:22.912 "name": "Null2", 00:14:22.912 "nguid": "42DE8B0B6F8546BB87C0F68A70D3F8E3", 00:14:22.912 "nsid": 1, 00:14:22.912 "uuid": "42de8b0b-6f85-46bb-87c0-f68a70d3f8e3" 00:14:22.912 } 00:14:22.912 ], 00:14:22.912 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:22.912 "serial_number": "SPDK00000000000002", 00:14:22.912 "subtype": "NVMe" 00:14:22.912 }, 00:14:22.912 { 00:14:22.912 "allow_any_host": true, 00:14:22.912 "hosts": [], 00:14:22.912 "listen_addresses": [ 00:14:22.912 { 00:14:22.912 "adrfam": "IPv4", 00:14:22.912 "traddr": "10.0.0.2", 00:14:22.912 "trsvcid": "4420", 00:14:22.912 "trtype": "TCP" 00:14:22.912 } 00:14:22.912 ], 00:14:22.912 "max_cntlid": 65519, 00:14:22.912 "max_namespaces": 32, 00:14:22.912 "min_cntlid": 1, 00:14:22.912 "model_number": "SPDK bdev Controller", 00:14:22.912 "namespaces": [ 00:14:22.912 { 00:14:22.912 "bdev_name": "Null3", 00:14:22.912 "name": "Null3", 00:14:22.912 "nguid": "020F947BCCD540F4AF0F8B66CE087F9A", 00:14:22.912 "nsid": 1, 00:14:22.912 "uuid": "020f947b-ccd5-40f4-af0f-8b66ce087f9a" 00:14:22.912 } 00:14:22.912 ], 00:14:22.912 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:22.912 "serial_number": "SPDK00000000000003", 00:14:22.912 "subtype": "NVMe" 00:14:22.912 }, 00:14:22.912 { 00:14:22.912 "allow_any_host": true, 00:14:22.912 "hosts": [], 00:14:22.912 "listen_addresses": [ 00:14:22.912 { 00:14:22.912 "adrfam": "IPv4", 00:14:22.912 "traddr": "10.0.0.2", 00:14:22.912 "trsvcid": "4420", 00:14:22.912 "trtype": "TCP" 00:14:22.912 } 00:14:22.912 ], 00:14:22.912 "max_cntlid": 65519, 00:14:22.912 "max_namespaces": 32, 00:14:22.912 "min_cntlid": 1, 00:14:22.912 "model_number": "SPDK bdev Controller", 00:14:22.912 "namespaces": [ 00:14:22.912 { 00:14:22.912 "bdev_name": "Null4", 00:14:22.912 "name": "Null4", 00:14:22.912 "nguid": "816EC0CF02844B0FB13A88001455D6E4", 00:14:22.912 "nsid": 1, 00:14:22.912 "uuid": "816ec0cf-0284-4b0f-b13a-88001455d6e4" 00:14:22.912 } 00:14:22.912 ], 00:14:22.912 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:22.912 "serial_number": "SPDK00000000000004", 00:14:22.912 "subtype": 
"NVMe" 00:14:22.912 } 00:14:22.912 ] 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.912 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.913 13:32:35 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:22.913 13:32:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:22.913 rmmod nvme_tcp 00:14:22.913 rmmod nvme_fabrics 00:14:23.171 rmmod nvme_keyring 00:14:23.171 13:32:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:23.171 13:32:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:14:23.171 13:32:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:14:23.171 13:32:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 79507 ']' 00:14:23.171 13:32:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 79507 00:14:23.171 13:32:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 79507 ']' 00:14:23.171 13:32:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 79507 00:14:23.171 13:32:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:14:23.171 13:32:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:23.171 
13:32:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79507 00:14:23.171 13:32:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:23.171 13:32:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:23.171 killing process with pid 79507 00:14:23.171 13:32:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79507' 00:14:23.171 13:32:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 79507 00:14:23.171 [2024-05-15 13:32:36.063726] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:23.171 13:32:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 79507 00:14:23.429 13:32:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:23.429 13:32:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:23.429 13:32:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:23.429 13:32:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:23.429 13:32:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:23.429 13:32:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.429 13:32:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.430 13:32:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.430 13:32:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:23.430 00:14:23.430 real 0m2.363s 00:14:23.430 user 0m6.570s 00:14:23.430 sys 0m0.625s 00:14:23.430 13:32:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:23.430 13:32:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:23.430 ************************************ 00:14:23.430 END TEST nvmf_target_discovery 00:14:23.430 ************************************ 00:14:23.430 13:32:36 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:23.430 13:32:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:23.430 13:32:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:23.430 13:32:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:23.430 ************************************ 00:14:23.430 START TEST nvmf_referrals 00:14:23.430 ************************************ 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:23.430 * Looking for test storage... 
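Note on the teardown traced above: discovery.sh unwinds its setup in reverse order, deleting each subsystem before the null bdev that backed it, dropping the referral it registered, and treating an empty bdev listing as proof that cleanup completed. Condensed into plain shell (a sketch assembled from the xtrace lines, using the harness's rpc_cmd wrapper, not the literal script source):

  for i in $(seq 1 4); do
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"    # remove the subsystem first
      rpc_cmd bdev_null_delete "Null$i"                              # then its backing null bdev
  done
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430  # drop the referral added during setup
  check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')
  [ -n "$check_bdevs" ] && exit 1                                    # any surviving bdev fails the test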
00:14:23.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:23.430 Cannot find device "nvmf_tgt_br" 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:23.430 Cannot find device "nvmf_tgt_br2" 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:23.430 Cannot find device "nvmf_tgt_br" 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:14:23.430 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:23.430 Cannot find device "nvmf_tgt_br2" 
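The "Cannot find device" and "Cannot open network namespace" messages here and just below are expected: nvmf_veth_init begins by tearing down whatever interfaces and namespace a previous run may have left behind, and the bare `true` entries in the trace show that each teardown step is guarded so a missing device does not abort the run. Roughly (a sketch of the pattern, not the literal common.sh source):

  ip link set nvmf_tgt_br nomaster                           || true
  ip link set nvmf_tgt_br2 nomaster                          || true
  ip link set nvmf_init_br down                              || true
  ip link delete nvmf_br type bridge                         || true
  ip link delete nvmf_init_if                                || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true   # fails harmlessly if the netns is gone
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true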
00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:23.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:23.689 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:23.689 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:23.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.108 ms 00:14:23.947 00:14:23.947 --- 10.0.0.2 ping statistics --- 00:14:23.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.947 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:23.947 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:23.947 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:14:23.947 00:14:23.947 --- 10.0.0.3 ping statistics --- 00:14:23.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.947 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:23.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:23.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:23.947 00:14:23.947 --- 10.0.0.1 ping statistics --- 00:14:23.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.947 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=79735 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 79735 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 79735 ']' 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
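For reference, the bring-up traced above reduces to: create a network namespace for the target, build three veth pairs, move the target-side ends into the namespace, address them, bridge the host-side ends, open TCP/4420, verify reachability with ping, and only then start nvmf_tgt inside the namespace. A condensed sketch from the xtrace lines (link `up` steps omitted; in the harness the target runs in the background and its pid, 79735 here, is recorded as nvmfpid):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br            # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br             # target, first port
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2            # target, second port
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                              # bridge the host-side ends together
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                             # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF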
00:14:23.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:23.947 13:32:36 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:23.947 [2024-05-15 13:32:36.917129] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:23.947 [2024-05-15 13:32:36.917208] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.947 [2024-05-15 13:32:37.037882] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:24.205 [2024-05-15 13:32:37.054288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:24.205 [2024-05-15 13:32:37.148493] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:24.205 [2024-05-15 13:32:37.148545] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:24.206 [2024-05-15 13:32:37.148557] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:24.206 [2024-05-15 13:32:37.148566] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:24.206 [2024-05-15 13:32:37.148574] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:24.206 [2024-05-15 13:32:37.148698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:24.206 [2024-05-15 13:32:37.148843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:24.206 [2024-05-15 13:32:37.149539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:24.206 [2024-05-15 13:32:37.149568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.141 [2024-05-15 13:32:37.966788] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # 
set +x 00:14:25.141 [2024-05-15 13:32:37.989200] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:25.141 [2024-05-15 13:32:37.989525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.141 13:32:37 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.141 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.399 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # sort 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:25.400 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- 
# nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:25.659 13:32:38 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:25.915 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:25.915 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:25.915 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:25.915 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:25.915 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:25.915 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.915 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:25.915 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:25.915 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:25.915 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:25.915 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:25.915 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 
--hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:25.916 13:32:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:26.173 13:32:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:26.173 13:32:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:26.173 13:32:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:26.173 13:32:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:26.173 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:26.173 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:14:26.173 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:26.173 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:14:26.173 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:26.173 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:26.173 rmmod nvme_tcp 00:14:26.173 rmmod nvme_fabrics 00:14:26.173 rmmod nvme_keyring 00:14:26.173 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 79735 ']' 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 79735 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 79735 ']' 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 79735 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79735 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:26.431 killing process with pid 79735 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79735' 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 79735 00:14:26.431 [2024-05-15 13:32:39.298481] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 79735 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:26.431 
13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.431 13:32:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.690 13:32:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:26.690 00:14:26.690 real 0m3.199s 00:14:26.690 user 0m10.436s 00:14:26.690 sys 0m0.862s 00:14:26.690 13:32:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:26.690 13:32:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:26.690 ************************************ 00:14:26.690 END TEST nvmf_referrals 00:14:26.690 ************************************ 00:14:26.690 13:32:39 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:26.690 13:32:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:26.690 13:32:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:26.690 13:32:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:26.690 ************************************ 00:14:26.690 START TEST nvmf_connect_disconnect 00:14:26.690 ************************************ 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:26.690 * Looking for test storage... 00:14:26.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:26.690 
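Stepping back before the connect/disconnect setup scrolls past: the nvmf_referrals run that just finished exercises the discovery-referral RPCs end to end, checking that the target-side listing and the host's discovery log stay in agreement as referrals come and go. Stripped of the get_referral_ips/get_discovery_entries helpers and the xtrace bookkeeping, the traced flow is approximately as follows (loops are a condensation, not the literal referrals.sh source; the harness also passes --hostnqn/--hostid to nvme discover):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery     # discovery service listener

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                                  # three plain referrals on port 4430
      rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr'            # target-side view
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'   # host-side view
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do                                  # remove them; both views must empty out
      rpc_cmd nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done

  # Referrals may also name a subsystem NQN (or the discovery NQN) explicitly.
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery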
13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:26.690 13:32:39 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:26.690 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:26.691 Cannot find device "nvmf_tgt_br" 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:26.691 Cannot find device "nvmf_tgt_br2" 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:26.691 Cannot find device "nvmf_tgt_br" 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:26.691 Cannot find device "nvmf_tgt_br2" 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:14:26.691 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:26.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:26.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev 
nvmf_tgt_if2 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:26.949 13:32:39 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:26.949 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:26.949 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:26.949 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:26.949 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:26.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:14:26.949 00:14:26.949 --- 10.0.0.2 ping statistics --- 00:14:26.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.949 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:26.949 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:26.949 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:26.949 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:14:26.949 00:14:26.949 --- 10.0.0.3 ping statistics --- 00:14:26.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.949 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:26.949 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:27.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:27.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:27.208 00:14:27.208 --- 10.0.0.1 ping statistics --- 00:14:27.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.208 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=80043 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 80043 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 80043 ']' 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:27.208 13:32:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:27.208 [2024-05-15 13:32:40.136865] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:14:27.208 [2024-05-15 13:32:40.136963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.208 [2024-05-15 13:32:40.259045] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
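The nvmf_veth_init trace above builds the test network that the rest of the suite relies on: a veth pair per target interface, a network namespace (nvmf_tgt_ns_spdk) holding the target-side ends, and a bridge (nvmf_br) tying the host-side peers together, verified with a ping in each direction before the target application is configured. A minimal sketch of the same steps, using only the interface names and addresses that appear in the trace (run as root; the second target interface, nvmf_tgt_if2 with 10.0.0.3, is handled identically and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # enslave the host-side peers to the bridge
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                           # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace -> host
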
00:14:27.208 [2024-05-15 13:32:40.276165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:27.465 [2024-05-15 13:32:40.377643] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.465 [2024-05-15 13:32:40.377702] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.465 [2024-05-15 13:32:40.377714] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:27.465 [2024-05-15 13:32:40.377723] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:27.465 [2024-05-15 13:32:40.377730] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:27.465 [2024-05-15 13:32:40.377820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:27.465 [2024-05-15 13:32:40.377899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.465 [2024-05-15 13:32:40.378662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:27.465 [2024-05-15 13:32:40.378669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:28.400 [2024-05-15 13:32:41.205309] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect 
-- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.400 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:28.400 [2024-05-15 13:32:41.276761] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:28.401 [2024-05-15 13:32:41.277050] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.401 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.401 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:14:28.401 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:14:28.401 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:14:28.401 13:32:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:30.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.206 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.000 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:15:26.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:19.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.098 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:30.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.430 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:02.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:08.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:11.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:13.140 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:17.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.100 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.535 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:33.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:37.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:39.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:46.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:48.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:55.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:57.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:00.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.951 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:04.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:10.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:10.806 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:18:10.806 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:18:10.806 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:10.806 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:11.065 rmmod nvme_tcp 00:18:11.065 rmmod nvme_fabrics 00:18:11.065 rmmod nvme_keyring 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 80043 ']' 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 80043 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 80043 ']' 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill 
-0 80043 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80043 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:11.065 killing process with pid 80043 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80043' 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 80043 00:18:11.065 [2024-05-15 13:36:23.984745] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:11.065 13:36:23 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 80043 00:18:11.357 13:36:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:11.357 13:36:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:11.357 13:36:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:11.357 13:36:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:11.357 13:36:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:11.357 13:36:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.357 13:36:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.357 13:36:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.357 13:36:24 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:11.357 00:18:11.357 real 3m44.647s 00:18:11.357 user 14m31.275s 00:18:11.357 sys 0m25.122s 00:18:11.357 13:36:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:11.357 13:36:24 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:11.357 ************************************ 00:18:11.357 END TEST nvmf_connect_disconnect 00:18:11.357 ************************************ 00:18:11.357 13:36:24 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:11.357 13:36:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:11.358 13:36:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:11.358 13:36:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:11.358 ************************************ 00:18:11.358 START TEST nvmf_multitarget 00:18:11.358 ************************************ 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:18:11.358 * Looking for test storage... 
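The nvmf_connect_disconnect run that just finished follows a simple shape once the network is up: start nvmf_tgt inside the namespace, configure it over the RPC socket, then drive the kernel initiator through 100 connect/disconnect cycles, each of which prints one of the "disconnected 1 controller(s)" lines above. A rough sketch of that core sequence, taken from the rpc_cmd calls in the trace (the real connect_disconnect.sh adds readiness checks between steps that are omitted here):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512                               # returns the bdev name Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  modprobe nvme-tcp
  for i in $(seq 1 100); do                                    # num_iterations=100 in the trace
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # prints "NQN:... disconnected 1 controller(s)"
  done
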
00:18:11.358 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.358 13:36:24 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:11.358 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:11.661 Cannot find device "nvmf_tgt_br" 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.661 Cannot find device "nvmf_tgt_br2" 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:11.661 Cannot find device "nvmf_tgt_br" 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:11.661 Cannot find device "nvmf_tgt_br2" 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:18:11.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:11.661 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:11.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:11.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:18:11.661 00:18:11.661 --- 10.0.0.2 ping statistics --- 00:18:11.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.661 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:11.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:11.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:18:11.661 00:18:11.661 --- 10.0.0.3 ping statistics --- 00:18:11.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.661 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:11.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:11.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:11.661 00:18:11.661 --- 10.0.0.1 ping statistics --- 00:18:11.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.661 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:11.661 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:11.919 13:36:24 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:11.919 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:11.919 13:36:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:11.919 13:36:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:11.919 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=83812 00:18:11.919 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:11.919 13:36:24 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 83812 00:18:11.919 13:36:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 83812 ']' 00:18:11.919 13:36:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.919 13:36:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:11.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.919 13:36:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
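The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the waitforlisten helper, which holds the test until the freshly started nvmf_tgt answers RPCs. One simple way to express the same wait, not necessarily how the helper implements it (rpc_get_methods is used here only as a cheap RPC that succeeds once the app is listening):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                                                # retry until the RPC socket is up
  done
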
00:18:11.919 13:36:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:11.919 13:36:24 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:11.919 [2024-05-15 13:36:24.841521] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:18:11.919 [2024-05-15 13:36:24.841675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.919 [2024-05-15 13:36:24.968586] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:11.919 [2024-05-15 13:36:24.981232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:12.177 [2024-05-15 13:36:25.112878] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.177 [2024-05-15 13:36:25.112972] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.177 [2024-05-15 13:36:25.112985] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.177 [2024-05-15 13:36:25.112994] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.177 [2024-05-15 13:36:25.113002] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.177 [2024-05-15 13:36:25.113170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.177 [2024-05-15 13:36:25.113303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.177 [2024-05-15 13:36:25.113943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:12.177 [2024-05-15 13:36:25.113998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.743 13:36:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:12.743 13:36:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:18:12.743 13:36:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:12.743 13:36:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:12.743 13:36:25 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:12.999 13:36:25 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.999 13:36:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:12.999 13:36:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:18:12.999 13:36:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:12.999 13:36:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:12.999 13:36:25 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:18:13.255 "nvmf_tgt_1" 00:18:13.255 13:36:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:13.255 
"nvmf_tgt_2" 00:18:13.255 13:36:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:13.255 13:36:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:18:13.513 13:36:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:13.513 13:36:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:18:13.513 true 00:18:13.513 13:36:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:13.769 true 00:18:13.769 13:36:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:13.769 13:36:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:18:13.769 13:36:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:13.769 13:36:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:13.769 13:36:26 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:18:13.769 13:36:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:13.769 13:36:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:18:13.769 13:36:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:13.769 13:36:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:18:13.769 13:36:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:13.769 13:36:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:13.769 rmmod nvme_tcp 00:18:13.769 rmmod nvme_fabrics 00:18:13.769 rmmod nvme_keyring 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 83812 ']' 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 83812 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 83812 ']' 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 83812 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83812 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:14.026 killing process with pid 83812 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83812' 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 83812 00:18:14.026 13:36:26 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 83812 00:18:14.284 13:36:27 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:14.284 13:36:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:14.284 13:36:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:14.284 13:36:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.284 13:36:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.284 13:36:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.284 13:36:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.284 13:36:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.284 13:36:27 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:14.284 ************************************ 00:18:14.284 END TEST nvmf_multitarget 00:18:14.284 ************************************ 00:18:14.284 00:18:14.284 real 0m2.969s 00:18:14.284 user 0m9.516s 00:18:14.284 sys 0m0.773s 00:18:14.284 13:36:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:14.284 13:36:27 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:14.284 13:36:27 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:14.284 13:36:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:14.284 13:36:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:14.284 13:36:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:14.284 ************************************ 00:18:14.284 START TEST nvmf_rpc 00:18:14.284 ************************************ 00:18:14.284 13:36:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:18:14.543 * Looking for test storage... 
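The multitarget run that just completed is a pure RPC exercise: it counts the targets the application starts with, creates two more, checks the count, deletes them again, and confirms the count is back to one. Condensed into roughly what multitarget.sh drives, using the same helper and arguments that appear in the trace above:

  rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py
  $rpc nvmf_get_targets | jq length                            # 1: only the default target
  $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc nvmf_get_targets | jq length                            # 3 after the two creates
  $rpc nvmf_delete_target -n nvmf_tgt_1                        # each delete prints "true"
  $rpc nvmf_delete_target -n nvmf_tgt_2
  $rpc nvmf_get_targets | jq length                            # back to 1
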
00:18:14.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:14.543 Cannot find device "nvmf_tgt_br" 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.543 Cannot find device "nvmf_tgt_br2" 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:14.543 Cannot find device "nvmf_tgt_br" 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:14.543 Cannot find device "nvmf_tgt_br2" 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:14.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:14.543 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:14.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:14.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:18:14.801 00:18:14.801 --- 10.0.0.2 ping statistics --- 00:18:14.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.801 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:14.801 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:14.801 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:18:14.801 00:18:14.801 --- 10.0.0.3 ping statistics --- 00:18:14.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.801 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:14.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:14.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:14.801 00:18:14.801 --- 10.0.0.1 ping statistics --- 00:18:14.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:14.801 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=84045 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 84045 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 84045 ']' 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:14.801 13:36:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:14.801 [2024-05-15 13:36:27.889478] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:18:14.801 [2024-05-15 13:36:27.889572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.059 [2024-05-15 13:36:28.010229] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:15.059 [2024-05-15 13:36:28.029461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:15.059 [2024-05-15 13:36:28.129026] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.059 [2024-05-15 13:36:28.129304] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
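The nvmf_veth_init trace above reduces to a small veth-pair/bridge topology between the host (initiator side) and the nvmf_tgt_ns_spdk namespace (target side). A condensed, hedged reconstruction of those commands, with interface names and 10.0.0.x addresses copied from the trace (the second target interface nvmf_tgt_if2 / 10.0.0.3 is set up the same way and omitted here; this is a simplification, not the exact common.sh code):

    # Hedged sketch of the topology nvmf_veth_init builds in the trace above.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge the two pairs
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host -> namespace reachability, as checked in the log

With that in place, common.sh prepends the netns command to NVMF_APP (nvmf/common.sh@209 above), so the nvmf_tgt started next listens inside the namespace while the initiator-side nvme connect calls run on the host.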
00:18:15.059 [2024-05-15 13:36:28.129467] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.059 [2024-05-15 13:36:28.129730] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.059 [2024-05-15 13:36:28.129905] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.059 [2024-05-15 13:36:28.130058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.059 [2024-05-15 13:36:28.130181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.059 [2024-05-15 13:36:28.130237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:15.059 [2024-05-15 13:36:28.130241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.019 13:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:16.019 13:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:18:16.019 13:36:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:16.019 13:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:16.019 13:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.019 13:36:28 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.019 13:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:16.019 13:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.019 13:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.019 13:36:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.019 13:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:16.019 "poll_groups": [ 00:18:16.019 { 00:18:16.019 "admin_qpairs": 0, 00:18:16.019 "completed_nvme_io": 0, 00:18:16.019 "current_admin_qpairs": 0, 00:18:16.019 "current_io_qpairs": 0, 00:18:16.019 "io_qpairs": 0, 00:18:16.019 "name": "nvmf_tgt_poll_group_000", 00:18:16.019 "pending_bdev_io": 0, 00:18:16.019 "transports": [] 00:18:16.019 }, 00:18:16.019 { 00:18:16.019 "admin_qpairs": 0, 00:18:16.019 "completed_nvme_io": 0, 00:18:16.019 "current_admin_qpairs": 0, 00:18:16.019 "current_io_qpairs": 0, 00:18:16.019 "io_qpairs": 0, 00:18:16.019 "name": "nvmf_tgt_poll_group_001", 00:18:16.019 "pending_bdev_io": 0, 00:18:16.019 "transports": [] 00:18:16.019 }, 00:18:16.019 { 00:18:16.019 "admin_qpairs": 0, 00:18:16.019 "completed_nvme_io": 0, 00:18:16.019 "current_admin_qpairs": 0, 00:18:16.019 "current_io_qpairs": 0, 00:18:16.019 "io_qpairs": 0, 00:18:16.019 "name": "nvmf_tgt_poll_group_002", 00:18:16.019 "pending_bdev_io": 0, 00:18:16.019 "transports": [] 00:18:16.019 }, 00:18:16.019 { 00:18:16.020 "admin_qpairs": 0, 00:18:16.020 "completed_nvme_io": 0, 00:18:16.020 "current_admin_qpairs": 0, 00:18:16.020 "current_io_qpairs": 0, 00:18:16.020 "io_qpairs": 0, 00:18:16.020 "name": "nvmf_tgt_poll_group_003", 00:18:16.020 "pending_bdev_io": 0, 00:18:16.020 "transports": [] 00:18:16.020 } 00:18:16.020 ], 00:18:16.020 "tick_rate": 2200000000 00:18:16.020 }' 00:18:16.020 13:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:16.020 13:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:16.020 13:36:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:16.020 13:36:28 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:18:16.020 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:18:16.020 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:16.020 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:16.020 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:16.020 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.020 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.020 [2024-05-15 13:36:29.073480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.020 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.020 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:16.020 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.020 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.276 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.276 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:16.276 "poll_groups": [ 00:18:16.276 { 00:18:16.276 "admin_qpairs": 0, 00:18:16.276 "completed_nvme_io": 0, 00:18:16.276 "current_admin_qpairs": 0, 00:18:16.276 "current_io_qpairs": 0, 00:18:16.276 "io_qpairs": 0, 00:18:16.276 "name": "nvmf_tgt_poll_group_000", 00:18:16.276 "pending_bdev_io": 0, 00:18:16.276 "transports": [ 00:18:16.276 { 00:18:16.276 "trtype": "TCP" 00:18:16.276 } 00:18:16.276 ] 00:18:16.276 }, 00:18:16.276 { 00:18:16.276 "admin_qpairs": 0, 00:18:16.276 "completed_nvme_io": 0, 00:18:16.276 "current_admin_qpairs": 0, 00:18:16.276 "current_io_qpairs": 0, 00:18:16.276 "io_qpairs": 0, 00:18:16.276 "name": "nvmf_tgt_poll_group_001", 00:18:16.276 "pending_bdev_io": 0, 00:18:16.276 "transports": [ 00:18:16.276 { 00:18:16.276 "trtype": "TCP" 00:18:16.276 } 00:18:16.276 ] 00:18:16.276 }, 00:18:16.276 { 00:18:16.276 "admin_qpairs": 0, 00:18:16.276 "completed_nvme_io": 0, 00:18:16.276 "current_admin_qpairs": 0, 00:18:16.276 "current_io_qpairs": 0, 00:18:16.276 "io_qpairs": 0, 00:18:16.276 "name": "nvmf_tgt_poll_group_002", 00:18:16.276 "pending_bdev_io": 0, 00:18:16.276 "transports": [ 00:18:16.276 { 00:18:16.276 "trtype": "TCP" 00:18:16.276 } 00:18:16.276 ] 00:18:16.276 }, 00:18:16.276 { 00:18:16.276 "admin_qpairs": 0, 00:18:16.277 "completed_nvme_io": 0, 00:18:16.277 "current_admin_qpairs": 0, 00:18:16.277 "current_io_qpairs": 0, 00:18:16.277 "io_qpairs": 0, 00:18:16.277 "name": "nvmf_tgt_poll_group_003", 00:18:16.277 "pending_bdev_io": 0, 00:18:16.277 "transports": [ 00:18:16.277 { 00:18:16.277 "trtype": "TCP" 00:18:16.277 } 00:18:16.277 ] 00:18:16.277 } 00:18:16.277 ], 00:18:16.277 "tick_rate": 2200000000 00:18:16.277 }' 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
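Condensing the RPC side of this stretch: after nvmf_create_transport the poll-group stats gain a TCP transport entry, and the test then builds a malloc-backed subsystem to connect against. A hedged sketch of the equivalent direct scripts/rpc.py calls follows; the values are copied from the traced rpc_cmd invocations, while the flag glosses in the comments and the shortened nvme connect line are my own reading, not harness code:

    # Hedged reconstruction of the traced rpc_cmd sequence as explicit rpc.py calls.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192     # -u 8192: in-capsule data size
    $RPC nvmf_get_stats                              # each poll group now reports a TCP transport
    $RPC bdev_malloc_create 64 512 -b Malloc1        # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # disallow unlisted hosts
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # With allow_any_host disabled and no host NQN added yet, the connect below is
    # expected to fail with "does not allow host" -- which is exactly what the
    # NOT-wrapped nvme connect later in the trace verifies before
    # nvmf_subsystem_add_host is called.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd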
00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.277 Malloc1 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.277 [2024-05-15 13:36:29.286167] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:16.277 [2024-05-15 13:36:29.286702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -a 10.0.0.2 -s 4420 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 
--hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -a 10.0.0.2 -s 4420 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -a 10.0.0.2 -s 4420 00:18:16.277 [2024-05-15 13:36:29.314861] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd' 00:18:16.277 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:16.277 could not add new controller: failed to write to nvme-fabrics device 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.277 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:16.534 13:36:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:16.535 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:18:16.535 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:16.535 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:16.535 13:36:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:18:18.431 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:18.431 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:18.431 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c 
SPDKISFASTANDAWESOME 00:18:18.431 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:18.431 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:18.431 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:18:18.431 13:36:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:18.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:18.689 [2024-05-15 13:36:31.715853] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 
'nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd' 00:18:18.689 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:18.689 could not add new controller: failed to write to nvme-fabrics device 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.689 13:36:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:18.946 13:36:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:18.946 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:18:18.946 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.947 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:18.947 13:36:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:18:20.846 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:20.846 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:20.846 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:20.846 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:20.846 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.846 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:18:20.846 13:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:21.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.104 
13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.104 13:36:33 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.104 [2024-05-15 13:36:34.014154] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:21.104 13:36:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:23.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.632 [2024-05-15 13:36:36.301166] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.632 13:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:23.633 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.633 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:23.633 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.633 13:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 
--hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:23.633 13:36:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:23.633 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:18:23.633 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:23.633 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:23.633 13:36:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:18:25.533 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:25.533 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:25.533 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:25.533 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:25.533 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:25.533 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:18:25.533 13:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:25.791 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.791 13:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.791 
13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.792 [2024-05-15 13:36:38.700591] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:25.792 13:36:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:18:28.321 13:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:28.321 13:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:28.321 13:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:28.321 13:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:28.321 13:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:28.321 13:36:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:18:28.321 13:36:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:28.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.321 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.322 [2024-05-15 13:36:41.091884] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:28.322 13:36:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:18:30.220 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:30.220 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:30.220 13:36:43 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:30.220 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:30.220 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:30.220 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:18:30.220 13:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:30.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.479 [2024-05-15 13:36:43.391076] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.479 13:36:43 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:30.479 13:36:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:33.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.009 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # 
for i in $(seq 1 $loops) 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 [2024-05-15 13:36:45.695761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 [2024-05-15 13:36:45.743844] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 [2024-05-15 13:36:45.791960] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
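The trace above is one pass of the target/rpc.sh create/teardown loop: each of the five iterations creates the subsystem with a fixed serial number, adds a TCP listener on 10.0.0.2:4420 and the Malloc1 namespace, opens it to any host, then removes the namespace and deletes the subsystem again. A minimal standalone sketch of that pattern, assuming a running nvmf target with a Malloc1 bdev and calling scripts/rpc.py directly instead of the test suite's rpc_cmd wrapper (the rpc path is illustrative):

  rpc=./scripts/rpc.py                     # assumed location of the SPDK RPC client
  nqn=nqn.2016-06.io.spdk:cnode1
  for _ in $(seq 1 5); do                  # $loops is 5 in the trace above
      # create the subsystem with a fixed serial number
      $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
      # expose it over NVMe/TCP and attach the Malloc1 bdev as a namespace
      $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns "$nqn" Malloc1
      $rpc nvmf_subsystem_allow_any_host "$nqn"
      # tear it back down before the next iteration
      $rpc nvmf_subsystem_remove_ns "$nqn" 1
      $rpc nvmf_delete_subsystem "$nqn"
  done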
00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 [2024-05-15 13:36:45.840006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:33.010 13:36:45 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 [2024-05-15 13:36:45.888049] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.010 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:33.011 "poll_groups": [ 00:18:33.011 { 00:18:33.011 "admin_qpairs": 2, 00:18:33.011 "completed_nvme_io": 66, 00:18:33.011 "current_admin_qpairs": 0, 00:18:33.011 "current_io_qpairs": 0, 00:18:33.011 "io_qpairs": 16, 00:18:33.011 "name": "nvmf_tgt_poll_group_000", 00:18:33.011 "pending_bdev_io": 0, 00:18:33.011 "transports": [ 00:18:33.011 { 00:18:33.011 "trtype": "TCP" 00:18:33.011 } 00:18:33.011 ] 00:18:33.011 }, 00:18:33.011 { 00:18:33.011 "admin_qpairs": 3, 00:18:33.011 "completed_nvme_io": 67, 00:18:33.011 "current_admin_qpairs": 0, 00:18:33.011 "current_io_qpairs": 
0, 00:18:33.011 "io_qpairs": 17, 00:18:33.011 "name": "nvmf_tgt_poll_group_001", 00:18:33.011 "pending_bdev_io": 0, 00:18:33.011 "transports": [ 00:18:33.011 { 00:18:33.011 "trtype": "TCP" 00:18:33.011 } 00:18:33.011 ] 00:18:33.011 }, 00:18:33.011 { 00:18:33.011 "admin_qpairs": 1, 00:18:33.011 "completed_nvme_io": 120, 00:18:33.011 "current_admin_qpairs": 0, 00:18:33.011 "current_io_qpairs": 0, 00:18:33.011 "io_qpairs": 19, 00:18:33.011 "name": "nvmf_tgt_poll_group_002", 00:18:33.011 "pending_bdev_io": 0, 00:18:33.011 "transports": [ 00:18:33.011 { 00:18:33.011 "trtype": "TCP" 00:18:33.011 } 00:18:33.011 ] 00:18:33.011 }, 00:18:33.011 { 00:18:33.011 "admin_qpairs": 1, 00:18:33.011 "completed_nvme_io": 167, 00:18:33.011 "current_admin_qpairs": 0, 00:18:33.011 "current_io_qpairs": 0, 00:18:33.011 "io_qpairs": 18, 00:18:33.011 "name": "nvmf_tgt_poll_group_003", 00:18:33.011 "pending_bdev_io": 0, 00:18:33.011 "transports": [ 00:18:33.011 { 00:18:33.011 "trtype": "TCP" 00:18:33.011 } 00:18:33.011 ] 00:18:33.011 } 00:18:33.011 ], 00:18:33.011 "tick_rate": 2200000000 00:18:33.011 }' 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:33.011 13:36:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:33.011 13:36:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:33.011 13:36:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:33.011 13:36:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:33.011 13:36:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:18:33.011 13:36:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:33.011 13:36:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:33.011 13:36:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:33.011 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:33.011 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:18:33.011 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:33.011 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:18:33.011 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:33.011 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:33.011 rmmod nvme_tcp 00:18:33.011 rmmod nvme_fabrics 00:18:33.269 rmmod nvme_keyring 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 84045 ']' 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 84045 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 84045 ']' 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 84045 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # 
uname 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84045 00:18:33.269 killing process with pid 84045 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84045' 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 84045 00:18:33.269 [2024-05-15 13:36:46.172228] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:33.269 13:36:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 84045 00:18:33.528 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:33.528 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:33.528 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:33.528 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.528 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:33.528 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.528 13:36:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.528 13:36:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.528 13:36:46 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:33.528 00:18:33.528 real 0m19.122s 00:18:33.528 user 1m11.974s 00:18:33.528 sys 0m2.475s 00:18:33.528 13:36:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:33.528 13:36:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.528 ************************************ 00:18:33.528 END TEST nvmf_rpc 00:18:33.528 ************************************ 00:18:33.528 13:36:46 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:33.528 13:36:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:33.528 13:36:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:33.528 13:36:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:33.528 ************************************ 00:18:33.528 START TEST nvmf_invalid 00:18:33.528 ************************************ 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:33.528 * Looking for test storage... 
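Before tearing the target down, the nvmf_rpc test above pulls nvmf_get_stats and sums two counters across all poll groups with the jsum helper: jq extracts one number per poll group, awk adds them up, and the assertions only require that the totals are positive (7 admin qpairs and 70 io qpairs in this run). A minimal sketch of that aggregation, assuming the JSON from nvmf_get_stats has already been captured in $stats as in the trace:

  # $stats holds the JSON printed by "rpc.py nvmf_get_stats"
  jsum() {
      local filter=$1
      # one value per poll group, summed into a single total
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))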
00:18:33.528 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.528 
13:36:46 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:33.528 13:36:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.529 13:36:46 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:33.529 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:33.787 Cannot find device "nvmf_tgt_br" 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:33.787 Cannot find device "nvmf_tgt_br2" 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:33.787 Cannot find device "nvmf_tgt_br" 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:33.787 Cannot find device "nvmf_tgt_br2" 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:33.787 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:33.787 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:33.787 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:34.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:34.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:18:34.045 00:18:34.045 --- 10.0.0.2 ping statistics --- 00:18:34.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.045 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:34.045 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:34.045 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:18:34.045 00:18:34.045 --- 10.0.0.3 ping statistics --- 00:18:34.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.045 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:34.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:34.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:18:34.045 00:18:34.045 --- 10.0.0.1 ping statistics --- 00:18:34.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.045 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=84553 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 84553 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 84553 ']' 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:34.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:34.045 13:36:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:34.045 [2024-05-15 13:36:47.016358] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
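With NET_TYPE=virt, the nvmf_veth_init sequence above builds the whole test network in software before the target comes up: a network namespace for nvmf_tgt, veth pairs for the initiator and target sides, a bridge that joins them, addresses in 10.0.0.0/24, an iptables accept rule for port 4420, and ping checks in each direction. A condensed sketch of that sequence (namespace and interface names as in the log, second target interface left out, run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk       # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                      # bridge ties both veth peers together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target address
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # namespace -> initiator address

The target itself is then started inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF), and waitforlisten blocks until it is serving /var/tmp/spdk.sock, which is the startup output that follows.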
00:18:34.045 [2024-05-15 13:36:47.016446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.045 [2024-05-15 13:36:47.136293] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:34.304 [2024-05-15 13:36:47.150777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:34.304 [2024-05-15 13:36:47.252634] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.304 [2024-05-15 13:36:47.252710] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.304 [2024-05-15 13:36:47.252722] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.304 [2024-05-15 13:36:47.252732] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.304 [2024-05-15 13:36:47.252739] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:34.304 [2024-05-15 13:36:47.252937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.304 [2024-05-15 13:36:47.253057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.304 [2024-05-15 13:36:47.253635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:34.304 [2024-05-15 13:36:47.253655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.240 13:36:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:35.240 13:36:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:18:35.240 13:36:47 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.240 13:36:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.240 13:36:47 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:35.240 13:36:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.240 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:35.240 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7265 00:18:35.240 [2024-05-15 13:36:48.251926] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:35.240 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/05/15 13:36:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode7265 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:18:35.240 request: 00:18:35.240 { 00:18:35.240 "method": "nvmf_create_subsystem", 00:18:35.240 "params": { 00:18:35.240 "nqn": "nqn.2016-06.io.spdk:cnode7265", 00:18:35.240 "tgt_name": "foobar" 00:18:35.240 } 00:18:35.240 } 00:18:35.240 Got JSON-RPC error response 00:18:35.240 GoRPCClient: error on JSON-RPC call' 00:18:35.240 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/05/15 13:36:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[nqn:nqn.2016-06.io.spdk:cnode7265 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:18:35.240 request: 00:18:35.240 { 00:18:35.240 "method": "nvmf_create_subsystem", 00:18:35.240 "params": { 00:18:35.240 "nqn": "nqn.2016-06.io.spdk:cnode7265", 00:18:35.240 "tgt_name": "foobar" 00:18:35.240 } 00:18:35.240 } 00:18:35.240 Got JSON-RPC error response 00:18:35.240 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:35.240 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:35.240 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19970 00:18:35.526 [2024-05-15 13:36:48.500338] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19970: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:35.526 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/05/15 13:36:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19970 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:18:35.526 request: 00:18:35.526 { 00:18:35.526 "method": "nvmf_create_subsystem", 00:18:35.526 "params": { 00:18:35.526 "nqn": "nqn.2016-06.io.spdk:cnode19970", 00:18:35.526 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:18:35.526 } 00:18:35.526 } 00:18:35.526 Got JSON-RPC error response 00:18:35.526 GoRPCClient: error on JSON-RPC call' 00:18:35.526 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/05/15 13:36:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode19970 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:18:35.526 request: 00:18:35.526 { 00:18:35.526 "method": "nvmf_create_subsystem", 00:18:35.526 "params": { 00:18:35.526 "nqn": "nqn.2016-06.io.spdk:cnode19970", 00:18:35.526 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:18:35.526 } 00:18:35.526 } 00:18:35.526 Got JSON-RPC error response 00:18:35.526 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:35.526 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:35.526 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26343 00:18:35.805 [2024-05-15 13:36:48.800755] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26343: invalid model number 'SPDK_Controller' 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/05/15 13:36:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode26343], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:18:35.805 request: 00:18:35.805 { 00:18:35.805 "method": "nvmf_create_subsystem", 00:18:35.805 "params": { 00:18:35.805 "nqn": "nqn.2016-06.io.spdk:cnode26343", 00:18:35.805 "model_number": "SPDK_Controller\u001f" 00:18:35.805 } 00:18:35.805 } 00:18:35.805 Got JSON-RPC error response 00:18:35.805 GoRPCClient: error on JSON-RPC call' 00:18:35.805 13:36:48 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/05/15 13:36:48 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode26343], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:18:35.805 request: 00:18:35.805 { 00:18:35.805 "method": "nvmf_create_subsystem", 00:18:35.805 "params": { 00:18:35.805 "nqn": "nqn.2016-06.io.spdk:cnode26343", 00:18:35.805 "model_number": "SPDK_Controller\u001f" 00:18:35.805 } 00:18:35.805 } 00:18:35.805 Got JSON-RPC error response 00:18:35.805 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.805 13:36:48 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.805 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:18:35.806 13:36:48 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:35.806 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:18:36.065 13:36:48 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ 5 == \- ]] 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '5^6PqG82 V`*x`y,[/$aZ' 00:18:36.065 13:36:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '5^6PqG82 V`*x`y,[/$aZ' nqn.2016-06.io.spdk:cnode16364 00:18:36.065 [2024-05-15 13:36:49.133226] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16364: invalid serial number '5^6PqG82 V`*x`y,[/$aZ' 00:18:36.065 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/05/15 13:36:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16364 serial_number:5^6PqG82 V`*x`y,[/$aZ], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 5^6PqG82 V`*x`y,[/$aZ 00:18:36.065 request: 00:18:36.065 { 00:18:36.065 "method": "nvmf_create_subsystem", 00:18:36.065 "params": { 00:18:36.065 "nqn": "nqn.2016-06.io.spdk:cnode16364", 00:18:36.065 "serial_number": "5^6PqG82 V`*x`y,[/$aZ" 00:18:36.065 } 00:18:36.065 } 00:18:36.065 Got JSON-RPC error response 00:18:36.065 GoRPCClient: error on JSON-RPC call' 00:18:36.065 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/05/15 13:36:49 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode16364 serial_number:5^6PqG82 V`*x`y,[/$aZ], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN 5^6PqG82 V`*x`y,[/$aZ 00:18:36.065 request: 00:18:36.065 { 00:18:36.065 "method": "nvmf_create_subsystem", 00:18:36.065 "params": { 00:18:36.065 "nqn": "nqn.2016-06.io.spdk:cnode16364", 00:18:36.065 "serial_number": "5^6PqG82 V`*x`y,[/$aZ" 00:18:36.065 } 00:18:36.065 } 00:18:36.065 Got JSON-RPC error response 00:18:36.065 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:36.065 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:36.065 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:36.065 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:36.065 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:36.065 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:36.065 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:36.065 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.065 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=n 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:18:36.325 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
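The xtrace above is invalid.sh's gen_random_s helper assembling a random string one character at a time: it draws decimal ASCII codes from a fixed 32-127 table, converts each code to hex with printf %x, expands it with echo -e '\xNN', and appends the result to string. A minimal standalone sketch of the same idea follows; gen_random_string is a hypothetical name, not the in-tree helper, and the range is narrowed to 32-126 to skip DEL.

#!/usr/bin/env bash
# Sketch only: mirrors the character-by-character generator traced above.
gen_random_string() {
    local length=$1 ll code string=''
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 95 + 32 ))                 # printable ASCII, 32..126
        string+=$(echo -e "\x$(printf %x "$code")")  # decimal -> hex -> character
    done
    echo "$string"
}
gen_random_string 41    # same length as the string being built in this trace

Appending through string+= keeps shell-special characters such as backquotes, brackets and dollar signs literal inside the variable, which is what lets the test hand deliberately hostile values to the RPC layer.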
00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 
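Once assembled, the string is passed to nvmf_create_subsystem just like the 21-character serial number earlier in the trace, and the test only passes if the call is rejected with the JSON-RPC Code=-32602 response seen above. A reduced sketch of that negative check, assuming a running target and the in-tree scripts/rpc.py (the NQN and value are simply the ones visible in this log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bad_sn='5^6PqG82 V`*x`y,[/$aZ'   # 21 characters that SPDK rejects as an invalid serial number
if out=$("$rpc" nvmf_create_subsystem -s "$bad_sn" nqn.2016-06.io.spdk:cnode16364 2>&1); then
    echo "ERROR: subsystem creation unexpectedly succeeded" >&2; exit 1
fi
[[ $out == *'Invalid SN'* ]] || { echo "ERROR: unexpected failure: $out" >&2; exit 1; }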
00:18:36.326 13:36:49 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'n|w>w0utHXz-5(){!?6csj0T5XT3w0utHXz-5(){!?6csj0T5XT3w0utHXz-5(){!?6csj0T5XT3w0utHXz-5(){!?6csj0T5XT3w0utHXz-5(){!?6csj0T5XT3w0utHXz-5(){!?6csj0T5XT3 /dev/null' 00:18:39.193 13:36:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.193 13:36:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:39.193 00:18:39.193 real 0m5.667s 00:18:39.193 user 0m22.675s 00:18:39.193 sys 0m1.242s 00:18:39.193 13:36:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:39.193 13:36:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:39.193 ************************************ 00:18:39.193 END TEST nvmf_invalid 00:18:39.193 ************************************ 00:18:39.193 13:36:52 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:18:39.193 13:36:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:39.193 13:36:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:39.193 13:36:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:39.193 ************************************ 00:18:39.193 START TEST nvmf_abort 00:18:39.193 ************************************ 00:18:39.193 13:36:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:18:39.452 * Looking for test storage... 00:18:39.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:39.452 13:36:52 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:39.452 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:18:39.452 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:39.453 Cannot find device "nvmf_tgt_br" 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:39.453 Cannot find device "nvmf_tgt_br2" 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:18:39.453 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:39.453 13:36:52 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:39.453 Cannot find device "nvmf_tgt_br" 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:39.454 Cannot find device "nvmf_tgt_br2" 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:39.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:39.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:39.454 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:39.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:18:39.714 00:18:39.714 --- 10.0.0.2 ping statistics --- 00:18:39.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.714 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:39.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:39.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:18:39.714 00:18:39.714 --- 10.0.0.3 ping statistics --- 00:18:39.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.714 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:39.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:39.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:18:39.714 00:18:39.714 --- 10.0.0.1 ping statistics --- 00:18:39.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.714 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.714 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=85060 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 85060 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 85060 ']' 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:39.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
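Before the target starts, nvmf_veth_init (traced above) builds the test network: a nvmf_tgt_ns_spdk namespace holding the target-side veth ends (10.0.0.2/24 and 10.0.0.3/24), the initiator-side nvmf_init_if (10.0.0.1/24) left in the default namespace, everything joined through the nvmf_br bridge, plus an iptables rule admitting TCP port 4420; the three pings then confirm each path works. Condensed to its essentials (second target interface omitted, run as root), the setup is roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # default namespace -> target namespace

The "Cannot find device" and "Cannot open network namespace" messages at the start of this block are just the preceding cleanup of interfaces left over from an earlier run, tolerated and followed by true.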
00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:39.715 13:36:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:39.715 [2024-05-15 13:36:52.711394] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:18:39.715 [2024-05-15 13:36:52.711513] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.973 [2024-05-15 13:36:52.835950] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:39.973 [2024-05-15 13:36:52.852593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:39.973 [2024-05-15 13:36:52.949410] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.973 [2024-05-15 13:36:52.949494] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.973 [2024-05-15 13:36:52.949521] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.973 [2024-05-15 13:36:52.949529] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.973 [2024-05-15 13:36:52.949536] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.973 [2024-05-15 13:36:52.949662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.973 [2024-05-15 13:36:52.949815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.973 [2024-05-15 13:36:52.950491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:40.914 [2024-05-15 13:36:53.725618] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:40.914 Malloc0 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:40.914 Delay0 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:40.914 [2024-05-15 13:36:53.797752] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:40.914 [2024-05-15 13:36:53.798052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.914 13:36:53 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:18:40.914 [2024-05-15 13:36:53.997952] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:43.441 Initializing NVMe Controllers 00:18:43.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:18:43.441 controller IO queue size 128 less than required 00:18:43.441 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:18:43.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:18:43.441 Initialization complete. Launching workers. 
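The abort test's target-side configuration is plain RPC traffic, all of it visible in the rpc_cmd calls above and summarized below: a TCP transport, a 64 MiB Malloc bdev with 4096-byte blocks wrapped in a Delay0 bdev whose artificial latencies keep I/O queued long enough to be abortable, and a subsystem exposing Delay0 on 10.0.0.2:4420. The completion counts printed just after this block come from build/examples/abort driving that listener at queue depth 128 (-q 128 -t 1). In the script these calls go through rpc_cmd against the default /var/tmp/spdk.sock socket; serialized as direct rpc.py invocations they would read roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # values below copied from the trace
"$rpc" nvmf_create_transport -t tcp -o -u 8192 -a 256
"$rpc" bdev_malloc_create 64 4096 -b Malloc0         # 64 MiB RAM-backed bdev, 4096-byte blocks
"$rpc" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420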
00:18:43.441 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 31995 00:18:43.441 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32060, failed to submit 62 00:18:43.441 success 31999, unsuccess 61, failed 0 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:43.441 rmmod nvme_tcp 00:18:43.441 rmmod nvme_fabrics 00:18:43.441 rmmod nvme_keyring 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 85060 ']' 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 85060 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 85060 ']' 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 85060 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85060 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85060' 00:18:43.441 killing process with pid 85060 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 85060 00:18:43.441 [2024-05-15 13:36:56.163328] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 85060 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:43.441 13:36:56 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:43.441 00:18:43.441 real 0m4.203s 00:18:43.441 user 0m12.221s 00:18:43.441 sys 0m1.003s 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:43.441 13:36:56 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:18:43.441 ************************************ 00:18:43.441 END TEST nvmf_abort 00:18:43.441 ************************************ 00:18:43.441 13:36:56 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:18:43.441 13:36:56 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:43.441 13:36:56 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:43.441 13:36:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:43.441 ************************************ 00:18:43.441 START TEST nvmf_ns_hotplug_stress 00:18:43.441 ************************************ 00:18:43.441 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:18:43.700 * Looking for test storage... 00:18:43.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.700 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:43.701 
13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:43.701 Cannot find device "nvmf_tgt_br" 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:43.701 Cannot find device "nvmf_tgt_br2" 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:43.701 Cannot find device "nvmf_tgt_br" 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:43.701 Cannot find device "nvmf_tgt_br2" 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:43.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:43.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:43.701 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:43.960 13:36:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:43.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:18:43.960 00:18:43.960 --- 10.0.0.2 ping statistics --- 00:18:43.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.960 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:43.960 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:43.960 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:18:43.960 00:18:43.960 --- 10.0.0.3 ping statistics --- 00:18:43.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.960 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:43.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:43.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:18:43.960 00:18:43.960 --- 10.0.0.1 ping statistics --- 00:18:43.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.960 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=85321 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 85321 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 85321 ']' 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.960 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:43.961 13:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.961 [2024-05-15 13:36:57.043672] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:18:43.961 [2024-05-15 13:36:57.044196] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.218 [2024-05-15 13:36:57.165217] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:44.218 [2024-05-15 13:36:57.184621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:44.218 [2024-05-15 13:36:57.296996] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.218 [2024-05-15 13:36:57.297069] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.218 [2024-05-15 13:36:57.297084] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.218 [2024-05-15 13:36:57.297094] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.218 [2024-05-15 13:36:57.297104] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:44.218 [2024-05-15 13:36:57.297229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.218 [2024-05-15 13:36:57.297364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.218 [2024-05-15 13:36:57.297371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.151 13:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:45.151 13:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:18:45.151 13:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:45.151 13:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.151 13:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:18:45.151 13:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:45.151 13:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:18:45.151 13:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:45.408 [2024-05-15 13:36:58.326743] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:45.408 13:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:45.666 13:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.922 [2024-05-15 13:36:58.895271] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:45.922 [2024-05-15 13:36:58.895726] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:45.922 13:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:46.179 13:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:18:46.500 Malloc0 00:18:46.500 13:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 
1000000 -w 1000000 -n 1000000 00:18:46.772 Delay0 00:18:46.772 13:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:47.030 13:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:18:47.287 NULL1 00:18:47.287 13:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:47.545 13:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=85457 00:18:47.545 13:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:18:47.545 13:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:18:47.545 13:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:49.022 Read completed with error (sct=0, sc=11) 00:18:49.022 13:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:49.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:49.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:49.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:49.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:49.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:18:49.022 13:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:18:49.022 13:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:18:49.278 true 00:18:49.278 13:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:18:49.278 13:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:50.213 13:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:50.469 13:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:18:50.469 13:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:18:50.725 true 00:18:50.725 13:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:18:50.725 13:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:50.982 13:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:51.238 13:37:04 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:18:51.238 13:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:18:51.495 true 00:18:51.495 13:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:18:51.495 13:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:51.753 13:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:52.010 13:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:18:52.010 13:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:18:52.268 true 00:18:52.268 13:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:18:52.268 13:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:53.202 13:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:53.459 13:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:18:53.459 13:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:18:53.731 true 00:18:53.732 13:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:18:53.732 13:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:54.009 13:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:54.267 13:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:18:54.267 13:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:18:54.267 true 00:18:54.525 13:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:18:54.525 13:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:54.782 13:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:55.040 13:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:18:55.040 13:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:18:55.040 true 00:18:55.298 13:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:18:55.298 13:37:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:56.232 13:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:56.490 13:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:18:56.490 13:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:18:56.748 true 00:18:56.748 13:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:18:56.748 13:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:57.005 13:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:57.264 13:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:18:57.264 13:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:18:57.522 true 00:18:57.522 13:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:18:57.522 13:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:57.780 13:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:58.041 13:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:18:58.041 13:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:18:58.302 true 00:18:58.302 13:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:18:58.302 13:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:59.240 13:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:59.499 13:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:18:59.499 13:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:18:59.757 true 00:18:59.757 13:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:18:59.757 13:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:00.016 13:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:00.274 13:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:19:00.274 13:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:19:00.563 true 00:19:00.563 13:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:00.563 13:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:00.563 13:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:00.820 13:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:19:00.821 13:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:19:01.078 true 00:19:01.078 13:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:01.078 13:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:02.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:02.457 13:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:02.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:02.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:02.457 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:02.457 13:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:19:02.457 13:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:19:02.716 true 00:19:02.716 13:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:02.716 13:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:03.650 13:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:03.650 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:19:03.650 13:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:19:03.650 13:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:19:03.908 true 00:19:03.908 13:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:03.908 13:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:04.210 13:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:04.468 13:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:19:04.468 13:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:19:04.726 true 00:19:04.726 13:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:04.726 13:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:05.659 13:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:05.659 13:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:19:05.659 13:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:19:05.918 true 00:19:05.918 13:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:05.918 13:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:06.176 13:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:06.434 13:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:19:06.434 13:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:19:06.693 true 00:19:06.693 13:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:06.693 13:37:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:07.627 13:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:07.627 13:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:19:07.627 13:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:19:07.885 true 00:19:07.885 13:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:07.885 13:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:08.143 13:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:08.401 13:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:19:08.401 13:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:19:08.659 true 00:19:08.659 13:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:08.659 13:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:09.225 13:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:09.225 13:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:19:09.225 13:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:19:09.482 true 00:19:09.482 13:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:09.482 13:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:10.417 13:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:10.674 13:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:19:10.674 13:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:19:10.932 true 00:19:10.932 13:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:10.932 13:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:11.190 13:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:11.448 13:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:19:11.448 13:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:19:11.752 true 00:19:11.752 13:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:11.752 13:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:12.685 13:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:12.685 13:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:19:12.685 13:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:19:12.943 true 00:19:13.201 13:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:13.201 13:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:13.459 13:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:13.717 13:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1025 00:19:13.717 13:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:19:13.717 true 00:19:13.717 13:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:13.717 13:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:14.283 13:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:14.283 13:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:19:14.283 13:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:19:14.542 true 00:19:14.542 13:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:14.542 13:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:15.477 13:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:15.735 13:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:19:15.735 13:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:19:15.994 true 00:19:15.994 13:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:15.994 13:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:16.251 13:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:16.511 13:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:19:16.511 13:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:19:16.771 true 00:19:17.030 13:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:17.030 13:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:17.289 13:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:17.547 13:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:19:17.547 13:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:19:17.806 true 00:19:17.806 13:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457 00:19:17.806 13:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:19:18.741 Initializing NVMe Controllers
00:19:18.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:18.741 Controller IO queue size 128, less than required.
00:19:18.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:18.741 Controller IO queue size 128, less than required.
00:19:18.741 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:18.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:18.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:19:18.741 Initialization complete. Launching workers.
00:19:18.741 ========================================================
00:19:18.741 Latency(us)
00:19:18.741 Device Information : IOPS MiB/s Average min max
00:19:18.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 406.00 0.20 151073.68 5237.15 1158054.97
00:19:18.741 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9003.87 4.40 14215.38 3708.10 644620.30
00:19:18.741 ========================================================
00:19:18.741 Total : 9409.87 4.59 20120.30 3708.10 1158054.97
00:19:18.741
00:19:18.741 13:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:19:18.741 13:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:19:18.741 13:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:19:18.999 true
00:19:18.999 13:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 85457
00:19:18.999 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (85457) - No such process
00:19:18.999 13:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 85457
00:19:18.999 13:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:19:19.257 13:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:19:19.516 13:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:19:19.516 13:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:19:19.516 13:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:19:19.516 13:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:19:19.516 13:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:19:19.774 null0
00:19:19.774 13:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:19:19.774 13:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:19:19.774 13:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:19:20.032 null1 00:19:20.032 13:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:20.032 13:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:20.032 13:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:19:20.290 null2 00:19:20.290 13:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:20.290 13:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:20.290 13:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:19:20.547 null3 00:19:20.547 13:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:20.547 13:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:20.547 13:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:19:20.804 null4 00:19:20.804 13:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:20.804 13:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:20.804 13:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:19:21.063 null5 00:19:21.063 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:21.063 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:21.063 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:19:21.321 null6 00:19:21.321 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:21.321 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:21.321 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:19:21.580 null7 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
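For reference, the 30 second perf plus hotplug phase that finished above (PERF_PID 85457, null_size growing from 1000 to 1030) reduces to roughly the loop below. It is reconstructed from the ns_hotplug_stress.sh line numbers echoed in the trace (@40 through @53) rather than copied from the script; $rpc_py is the scripts/rpc.py path set at @11:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    null_size=1000

    # 30 s of queue-depth-128 random reads against the subsystem (script line 40)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    # while perf is still running, hot-remove/re-add namespace 1 and grow NULL1 (lines 44-50)
    while kill -0 "$PERF_PID"; do
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        null_size=$((null_size + 1))
        $rpc_py bdev_null_resize NULL1 "$null_size"
    done
    wait "$PERF_PID"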
00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:21.580 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
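Each of the eight workers being launched here runs the add_remove helper whose body is echoed at script lines 14 through 18. The overall pattern, reconstructed from the trace (nthreads=8, bdev_null_create null0..null7, add_remove started in the background, PIDs collected and waited on), looks roughly like this, again with $rpc_py standing for scripts/rpc.py:

    add_remove() {
        local nsid=$1 bdev=$2
        # attach and detach the given bdev as namespace $nsid ten times
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

    nthreads=8
    pids=()

    # one null bdev per worker, created with the same 100 / 4096 arguments as in the trace
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096
    done

    # hot-add/remove null0..null7 as NSIDs 1..8 from eight concurrent background shells
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"   # the 'wait 86511 86512 ...' seen just below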
00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 86511 86512 86515 86517 86518 86520 86523 86524 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:21.581 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:21.838 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:21.838 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:21.838 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:21.838 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:22.096 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:22.096 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:22.096 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:22.096 13:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:22.096 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.096 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.096 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:22.096 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.096 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.096 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:22.096 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.096 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.096 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:22.354 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:22.611 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:22.611 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:22.611 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:22.611 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:22.611 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:22.611 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.611 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.611 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:22.611 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.611 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.612 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:22.612 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:22.870 13:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:23.127 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:23.127 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:23.127 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:23.127 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:23.127 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:23.127 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:23.127 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:23.127 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:23.127 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:23.384 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:23.640 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:23.897 13:37:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.200 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.458 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.458 13:37:37 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:24.716 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:24.716 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.716 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.716 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:24.716 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:24.716 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:24.716 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.716 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.716 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:24.716 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:24.716 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:24.716 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.716 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.716 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:24.973 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:24.973 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:24.973 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.973 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.973 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:24.973 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.973 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.973 13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:24.973 
13:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:24.973 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.973 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.974 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:24.974 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:24.974 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:24.974 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:24.974 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:25.231 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:25.231 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:25.231 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:25.231 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:25.231 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:25.231 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:25.231 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:25.231 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:25.231 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:25.231 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:25.231 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:25.231 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:25.488 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:25.488 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:25.488 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:25.488 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:19:25.488 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:25.488 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:25.488 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:25.488 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:25.488 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:25.488 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:25.488 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:25.489 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:25.489 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:25.489 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:25.489 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:25.489 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:25.489 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:25.746 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:25.746 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:25.746 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:25.746 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:25.746 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:25.747 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:25.747 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:25.747 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:25.747 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:25.747 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:25.747 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:25.747 13:37:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:25.747 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:25.747 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:25.747 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:25.747 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:25.747 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:26.004 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:26.004 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:26.004 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:26.004 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:26.004 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.004 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.004 13:37:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:26.004 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:26.004 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:26.004 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.004 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.004 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.263 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.264 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:19:26.522 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:26.522 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:26.522 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:26.522 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:26.522 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:19:26.522 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.522 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.522 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:19:26.522 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.522 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.522 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:19:26.780 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:19:27.038 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:19:27.038 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:27.038 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:27.038 13:37:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:27.038 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:19:27.038 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:27.038 
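The wall of rpc.py calls above is the hot-plug stress phase itself: ns_hotplug_stress.sh@16-18 keeps attaching one of the null bdevs (null0..null7) to nqn.2016-06.io.spdk:cnode1 as namespace 1..8 and then detaching it again, and the interleaved (( ++i )) / (( i < 10 )) counters suggest several of these ten-iteration loops run concurrently, which is why add and remove entries for different namespace ids overlap in the trace. A minimal sketch of one such loop, reconstructed only from the @16-@18 trace lines; the rpc_py variable and the fixed namespace id n are illustrative assumptions, not the script's exact code:

  # one hot-plug loop: attach bdev null(n-1) as namespace n, then detach it, ten times
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  n=1                                   # namespace id 1..8 in the real run, apparently one loop per id
  for (( i = 0; i < 10; ++i )); do
      "$rpc_py" nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$(( n - 1 ))"
      "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
  done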
13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:27.038 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:27.038 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:27.038 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:27.038 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:27.038 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:27.038 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:27.296 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:27.296 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:27.296 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:27.296 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:19:27.296 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:27.296 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:19:27.296 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:27.296 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:19:27.296 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:27.296 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:27.297 rmmod nvme_tcp 00:19:27.297 rmmod nvme_fabrics 00:19:27.297 rmmod nvme_keyring 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 85321 ']' 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 85321 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 85321 ']' 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 85321 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85321 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:27.297 killing process with pid 85321 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85321' 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 85321 00:19:27.297 [2024-05-15 13:37:40.346978] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:27.297 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 85321 00:19:27.556 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:27.556 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:27.556 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:27.556 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:27.556 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:27.556 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.556 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:27.556 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.556 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:27.556 ************************************ 00:19:27.556 END TEST nvmf_ns_hotplug_stress 00:19:27.556 ************************************ 00:19:27.556 00:19:27.556 real 0m44.141s 00:19:27.556 user 3m31.907s 00:19:27.556 sys 0m13.287s 00:19:27.556 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:27.556 13:37:40 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:19:27.829 13:37:40 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:27.829 13:37:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:27.829 13:37:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:27.829 13:37:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:27.829 ************************************ 00:19:27.829 START TEST nvmf_connect_stress 00:19:27.829 ************************************ 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:19:27.829 * Looking for test storage... 
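With the last namespace removed, the hot-plug case tears itself down: the EXIT trap is cleared, nvmftestfini unloads nvme_tcp, nvme_fabrics and nvme_keyring, kills target pid 85321, removes the nvmf_tgt_ns_spdk namespace and flushes nvmf_init_if, and the 44-second case is closed before the harness starts connect_stress. The START/END banners and the real/user/sys summary come from the run_test wrapper in autotest_common.sh; a hypothetical outline of that wrapper, inferred only from this output and not copied from the script:

  run_test() {   # sketch only; the real helper adds xtrace and timing bookkeeping
      [ "$#" -le 1 ] && return 1        # cf. the '[' 3 -le 1 ']' check in the trace
      local name=$1; shift
      printf '************************************\nSTART TEST %s\n************************************\n' "$name"
      time "$@"                         # produces the real/user/sys lines seen above
      local rc=$?
      printf '************************************\nEND TEST %s\n************************************\n' "$name"
      return $rc
  }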
00:19:27.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:27.829 Cannot find device "nvmf_tgt_br" 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:19:27.829 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:27.830 Cannot find device "nvmf_tgt_br2" 00:19:27.830 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:19:27.830 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:27.830 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:27.830 Cannot find device "nvmf_tgt_br" 00:19:27.830 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:19:27.830 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:27.830 Cannot find device "nvmf_tgt_br2" 00:19:27.830 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:19:27.830 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:27.830 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:27.830 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:19:27.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:27.830 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:19:27.830 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:27.830 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:28.087 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:19:28.088 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:28.088 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:28.088 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:28.088 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:28.088 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:28.088 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:28.088 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:28.088 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:28.088 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:28.088 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:28.088 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:28.088 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:28.088 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:28.088 13:37:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:28.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:28.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:19:28.088 00:19:28.088 --- 10.0.0.2 ping statistics --- 00:19:28.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.088 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:28.088 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:28.088 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:19:28.088 00:19:28.088 --- 10.0.0.3 ping statistics --- 00:19:28.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.088 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:28.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:28.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:28.088 00:19:28.088 --- 10.0.0.1 ping statistics --- 00:19:28.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.088 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=87834 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 87834 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 87834 ']' 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:28.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
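The three successful pings verify the topology that nvmf_veth_init just built: the initiator interface nvmf_init_if (10.0.0.1) lives in the default namespace, the target interfaces nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) live inside nvmf_tgt_ns_spdk, and the three veth peers are bridged together over nvmf_br with TCP port 4420 opened in iptables. Condensed from the ip/iptables commands in the trace above (the initial "Cannot find device" / "Cannot open network namespace" cleanup attempts are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT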
00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:28.088 13:37:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:28.088 [2024-05-15 13:37:41.176326] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:19:28.088 [2024-05-15 13:37:41.176429] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.348 [2024-05-15 13:37:41.304918] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:28.348 [2024-05-15 13:37:41.317250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:28.348 [2024-05-15 13:37:41.417232] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.348 [2024-05-15 13:37:41.417292] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.348 [2024-05-15 13:37:41.417304] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.348 [2024-05-15 13:37:41.417312] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.348 [2024-05-15 13:37:41.417320] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:28.348 [2024-05-15 13:37:41.417637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.348 [2024-05-15 13:37:41.417992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:28.348 [2024-05-15 13:37:41.417997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:29.281 [2024-05-15 13:37:42.231827] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
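With the target's reactors running (three of them here, matching the 0xE core mask passed to nvmf_tgt via ip netns exec), connect_stress.sh configures the target over its RPC socket: a TCP transport, a subsystem nqn.2016-06.io.spdk:cnode1 limited to 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB null bdev named NULL1; the listener and null-bdev calls appear in the next stretch of the trace. rpc_cmd in the trace is the harness's wrapper around scripts/rpc.py, so a roughly equivalent manual sequence, assuming the default /var/tmp/spdk.sock RPC socket, would be:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                                     # same flags the harness passes
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                                             # 1000 MB, 512-byte blocks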
00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:29.281 [2024-05-15 13:37:42.251766] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:29.281 [2024-05-15 13:37:42.252027] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:29.281 NULL1 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=87886 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 
13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.281 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.282 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.282 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.282 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.282 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.282 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:29.282 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:29.282 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:29.282 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:29.282 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.282 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:29.848 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.848 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:29.848 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:29.848 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.848 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:30.106 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.106 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 
87886 00:19:30.106 13:37:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:30.106 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.106 13:37:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:30.364 13:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.364 13:37:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:30.364 13:37:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:30.364 13:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.364 13:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:30.622 13:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.622 13:37:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:30.622 13:37:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:30.622 13:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.622 13:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:30.880 13:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.880 13:37:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:30.880 13:37:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:30.880 13:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.880 13:37:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:31.445 13:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.445 13:37:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:31.445 13:37:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:31.446 13:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.446 13:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:31.704 13:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.704 13:37:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:31.704 13:37:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:31.704 13:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.704 13:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:31.961 13:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.962 13:37:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:31.962 13:37:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:31.962 13:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.962 13:37:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:32.220 13:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.220 13:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:32.220 13:37:45 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:19:32.220 13:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.220 13:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:32.812 13:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.812 13:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:32.812 13:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:32.812 13:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.812 13:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:32.812 13:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.812 13:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:32.812 13:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:32.812 13:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.812 13:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:33.379 13:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.379 13:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:33.379 13:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:33.379 13:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.379 13:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:33.636 13:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.636 13:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:33.636 13:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:33.636 13:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.636 13:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:33.892 13:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.892 13:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:33.892 13:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:33.892 13:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.892 13:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.149 13:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.149 13:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:34.149 13:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:34.149 13:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.149 13:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.715 13:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.715 13:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:34.715 13:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:34.715 13:37:47 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.715 13:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.972 13:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.972 13:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:34.972 13:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:34.972 13:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.972 13:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:35.234 13:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.234 13:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:35.234 13:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:35.234 13:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.234 13:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:35.492 13:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.492 13:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:35.492 13:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:35.492 13:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.492 13:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:35.763 13:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.763 13:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:35.763 13:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:35.763 13:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.763 13:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:36.328 13:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.328 13:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:36.328 13:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.328 13:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.328 13:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:36.586 13:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.586 13:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:36.586 13:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.586 13:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.586 13:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:36.844 13:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.844 13:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:36.844 13:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.844 13:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:36.844 13:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:37.101 13:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.101 13:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:37.101 13:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:37.101 13:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.101 13:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:37.359 13:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.359 13:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:37.359 13:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:37.359 13:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.359 13:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:37.924 13:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.925 13:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:37.925 13:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:37.925 13:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.925 13:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:38.182 13:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.182 13:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:38.182 13:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:38.183 13:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.183 13:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:38.440 13:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.440 13:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:38.440 13:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:38.440 13:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.440 13:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:38.698 13:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.698 13:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:38.698 13:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:38.698 13:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.698 13:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:38.956 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.956 13:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:38.956 13:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:38.956 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.956 13:37:52 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:19:39.520 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.520 13:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:39.520 13:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.520 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.520 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:39.520 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:39.778 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.778 13:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 87886 00:19:39.778 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (87886) - No such process 00:19:39.778 13:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 87886 00:19:39.778 13:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:19:39.778 13:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:39.778 13:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:19:39.778 13:37:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:39.778 13:37:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:19:39.778 13:37:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:39.778 13:37:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:19:39.778 13:37:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:39.778 13:37:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:39.778 rmmod nvme_tcp 00:19:39.778 rmmod nvme_fabrics 00:19:39.778 rmmod nvme_keyring 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 87834 ']' 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 87834 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 87834 ']' 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 87834 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87834 00:19:39.779 killing process with pid 87834 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87834' 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 
87834 00:19:39.779 [2024-05-15 13:37:52.797545] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:39.779 13:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 87834 00:19:40.038 13:37:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:40.038 13:37:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:40.038 13:37:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:40.038 13:37:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:40.038 13:37:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:40.038 13:37:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.038 13:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.038 13:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.038 13:37:53 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:40.038 00:19:40.038 real 0m12.386s 00:19:40.038 user 0m41.182s 00:19:40.038 sys 0m3.438s 00:19:40.038 13:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:40.038 13:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.038 ************************************ 00:19:40.038 END TEST nvmf_connect_stress 00:19:40.038 ************************************ 00:19:40.038 13:37:53 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:40.038 13:37:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:40.038 13:37:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:40.038 13:37:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:40.038 ************************************ 00:19:40.038 START TEST nvmf_fused_ordering 00:19:40.038 ************************************ 00:19:40.038 13:37:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:19:40.315 * Looking for test storage... 
00:19:40.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:40.315 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:40.316 Cannot find device "nvmf_tgt_br" 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:40.316 Cannot find device "nvmf_tgt_br2" 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:40.316 Cannot find device "nvmf_tgt_br" 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:40.316 Cannot find device "nvmf_tgt_br2" 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:19:40.316 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:40.316 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:40.316 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:40.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:40.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:19:40.575 00:19:40.575 --- 10.0.0.2 ping statistics --- 00:19:40.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.575 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:40.575 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:40.575 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:19:40.575 00:19:40.575 --- 10.0.0.3 ping statistics --- 00:19:40.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.575 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:40.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:40.575 00:19:40.575 --- 10.0.0.1 ping statistics --- 00:19:40.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.575 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=88209 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 88209 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 88209 ']' 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.575 13:37:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:40.576 13:37:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
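nvmfappstart launches the SPDK target inside the namespace and waitforlisten then waits until the app is up and answering on the /var/tmp/spdk.sock RPC socket. The only notable difference from the connect_stress run above is the core mask: 0x2 (a single reactor) here versus 0xE (three reactors) there. A minimal sketch of the launch, following the command in the trace (backgrounding and PID capture are assumptions about the harness's plumbing, not shown verbatim in the log):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # -i: shm id, -e: tracepoint group mask
    nvmfpid=$!
    # the harness's waitforlisten polls until the RPC socket responds before issuing any rpc_cmd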
00:19:40.576 13:37:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:40.576 13:37:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:40.576 [2024-05-15 13:37:53.629406] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:19:40.576 [2024-05-15 13:37:53.629505] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.834 [2024-05-15 13:37:53.749740] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:40.834 [2024-05-15 13:37:53.769748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.834 [2024-05-15 13:37:53.869303] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.834 [2024-05-15 13:37:53.869357] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.834 [2024-05-15 13:37:53.869370] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.834 [2024-05-15 13:37:53.869392] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.834 [2024-05-15 13:37:53.869401] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:40.834 [2024-05-15 13:37:53.869429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:41.770 [2024-05-15 13:37:54.743248] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:41.770 13:37:54 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:41.770 [2024-05-15 13:37:54.763156] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:41.770 [2024-05-15 13:37:54.763385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:41.770 NULL1 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.770 13:37:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:41.770 [2024-05-15 13:37:54.814299] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:19:41.770 [2024-05-15 13:37:54.814357] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88259 ] 00:19:42.028 [2024-05-15 13:37:54.941026] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
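In addition to the transport/subsystem/listener/null-bdev calls seen earlier, this test waits for bdev examination and attaches NULL1 to the subsystem as a namespace (nvmf_subsystem_add_ns), which is why the app reports namespace 1 with a 1 GB size below. The fused_ordering app receives all of its connection parameters as a single SPDK transport-ID string via -r; reformatted for readability, the invocation from the trace is:

    /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The numbered fused_ordering(N) lines that follow are the app's running progress output.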
00:19:42.287 Attached to nqn.2016-06.io.spdk:cnode1 00:19:42.287 Namespace ID: 1 size: 1GB 00:19:42.287 fused_ordering(0) 00:19:42.287 fused_ordering(1) 00:19:42.287 fused_ordering(2) 00:19:42.287 fused_ordering(3) 00:19:42.287 fused_ordering(4) 00:19:42.287 fused_ordering(5) 00:19:42.287 fused_ordering(6) 00:19:42.287 fused_ordering(7) 00:19:42.287 fused_ordering(8) 00:19:42.287 fused_ordering(9) 00:19:42.287 fused_ordering(10) 00:19:42.287 fused_ordering(11) 00:19:42.287 fused_ordering(12) 00:19:42.287 fused_ordering(13) 00:19:42.287 fused_ordering(14) 00:19:42.287 fused_ordering(15) 00:19:42.287 fused_ordering(16) 00:19:42.287 fused_ordering(17) 00:19:42.287 fused_ordering(18) 00:19:42.287 fused_ordering(19) 00:19:42.287 fused_ordering(20) 00:19:42.287 fused_ordering(21) 00:19:42.287 fused_ordering(22) 00:19:42.287 fused_ordering(23) 00:19:42.287 fused_ordering(24) 00:19:42.287 fused_ordering(25) 00:19:42.287 fused_ordering(26) 00:19:42.287 fused_ordering(27) 00:19:42.287 fused_ordering(28) 00:19:42.287 fused_ordering(29) 00:19:42.287 fused_ordering(30) 00:19:42.287 fused_ordering(31) 00:19:42.287 fused_ordering(32) 00:19:42.287 fused_ordering(33) 00:19:42.287 fused_ordering(34) 00:19:42.287 fused_ordering(35) 00:19:42.287 fused_ordering(36) 00:19:42.287 fused_ordering(37) 00:19:42.287 fused_ordering(38) 00:19:42.287 fused_ordering(39) 00:19:42.287 fused_ordering(40) 00:19:42.287 fused_ordering(41) 00:19:42.287 fused_ordering(42) 00:19:42.287 fused_ordering(43) 00:19:42.287 fused_ordering(44) 00:19:42.287 fused_ordering(45) 00:19:42.287 fused_ordering(46) 00:19:42.287 fused_ordering(47) 00:19:42.287 fused_ordering(48) 00:19:42.287 fused_ordering(49) 00:19:42.287 fused_ordering(50) 00:19:42.287 fused_ordering(51) 00:19:42.287 fused_ordering(52) 00:19:42.287 fused_ordering(53) 00:19:42.287 fused_ordering(54) 00:19:42.287 fused_ordering(55) 00:19:42.287 fused_ordering(56) 00:19:42.287 fused_ordering(57) 00:19:42.287 fused_ordering(58) 00:19:42.287 fused_ordering(59) 00:19:42.287 fused_ordering(60) 00:19:42.287 fused_ordering(61) 00:19:42.287 fused_ordering(62) 00:19:42.287 fused_ordering(63) 00:19:42.287 fused_ordering(64) 00:19:42.287 fused_ordering(65) 00:19:42.287 fused_ordering(66) 00:19:42.287 fused_ordering(67) 00:19:42.287 fused_ordering(68) 00:19:42.287 fused_ordering(69) 00:19:42.287 fused_ordering(70) 00:19:42.287 fused_ordering(71) 00:19:42.287 fused_ordering(72) 00:19:42.287 fused_ordering(73) 00:19:42.287 fused_ordering(74) 00:19:42.287 fused_ordering(75) 00:19:42.287 fused_ordering(76) 00:19:42.287 fused_ordering(77) 00:19:42.287 fused_ordering(78) 00:19:42.287 fused_ordering(79) 00:19:42.287 fused_ordering(80) 00:19:42.287 fused_ordering(81) 00:19:42.287 fused_ordering(82) 00:19:42.287 fused_ordering(83) 00:19:42.287 fused_ordering(84) 00:19:42.287 fused_ordering(85) 00:19:42.287 fused_ordering(86) 00:19:42.287 fused_ordering(87) 00:19:42.287 fused_ordering(88) 00:19:42.287 fused_ordering(89) 00:19:42.287 fused_ordering(90) 00:19:42.287 fused_ordering(91) 00:19:42.287 fused_ordering(92) 00:19:42.287 fused_ordering(93) 00:19:42.287 fused_ordering(94) 00:19:42.287 fused_ordering(95) 00:19:42.287 fused_ordering(96) 00:19:42.287 fused_ordering(97) 00:19:42.287 fused_ordering(98) 00:19:42.287 fused_ordering(99) 00:19:42.287 fused_ordering(100) 00:19:42.287 fused_ordering(101) 00:19:42.287 fused_ordering(102) 00:19:42.287 fused_ordering(103) 00:19:42.287 fused_ordering(104) 00:19:42.287 fused_ordering(105) 00:19:42.287 fused_ordering(106) 00:19:42.287 fused_ordering(107) 
00:19:42.287 fused_ordering(108) 00:19:42.287 fused_ordering(109) 00:19:42.287 fused_ordering(110) 00:19:42.287 fused_ordering(111) 00:19:42.288 fused_ordering(112) 00:19:42.288 fused_ordering(113) 00:19:42.288 fused_ordering(114) 00:19:42.288 fused_ordering(115) 00:19:42.288 fused_ordering(116) 00:19:42.288 fused_ordering(117) 00:19:42.288 fused_ordering(118) 00:19:42.288 fused_ordering(119) 00:19:42.288 fused_ordering(120) 00:19:42.288 fused_ordering(121) 00:19:42.288 fused_ordering(122) 00:19:42.288 fused_ordering(123) 00:19:42.288 fused_ordering(124) 00:19:42.288 fused_ordering(125) 00:19:42.288 fused_ordering(126) 00:19:42.288 fused_ordering(127) 00:19:42.288 fused_ordering(128) 00:19:42.288 fused_ordering(129) 00:19:42.288 fused_ordering(130) 00:19:42.288 fused_ordering(131) 00:19:42.288 fused_ordering(132) 00:19:42.288 fused_ordering(133) 00:19:42.288 fused_ordering(134) 00:19:42.288 fused_ordering(135) 00:19:42.288 fused_ordering(136) 00:19:42.288 fused_ordering(137) 00:19:42.288 fused_ordering(138) 00:19:42.288 fused_ordering(139) 00:19:42.288 fused_ordering(140) 00:19:42.288 fused_ordering(141) 00:19:42.288 fused_ordering(142) 00:19:42.288 fused_ordering(143) 00:19:42.288 fused_ordering(144) 00:19:42.288 fused_ordering(145) 00:19:42.288 fused_ordering(146) 00:19:42.288 fused_ordering(147) 00:19:42.288 fused_ordering(148) 00:19:42.288 fused_ordering(149) 00:19:42.288 fused_ordering(150) 00:19:42.288 fused_ordering(151) 00:19:42.288 fused_ordering(152) 00:19:42.288 fused_ordering(153) 00:19:42.288 fused_ordering(154) 00:19:42.288 fused_ordering(155) 00:19:42.288 fused_ordering(156) 00:19:42.288 fused_ordering(157) 00:19:42.288 fused_ordering(158) 00:19:42.288 fused_ordering(159) 00:19:42.288 fused_ordering(160) 00:19:42.288 fused_ordering(161) 00:19:42.288 fused_ordering(162) 00:19:42.288 fused_ordering(163) 00:19:42.288 fused_ordering(164) 00:19:42.288 fused_ordering(165) 00:19:42.288 fused_ordering(166) 00:19:42.288 fused_ordering(167) 00:19:42.288 fused_ordering(168) 00:19:42.288 fused_ordering(169) 00:19:42.288 fused_ordering(170) 00:19:42.288 fused_ordering(171) 00:19:42.288 fused_ordering(172) 00:19:42.288 fused_ordering(173) 00:19:42.288 fused_ordering(174) 00:19:42.288 fused_ordering(175) 00:19:42.288 fused_ordering(176) 00:19:42.288 fused_ordering(177) 00:19:42.288 fused_ordering(178) 00:19:42.288 fused_ordering(179) 00:19:42.288 fused_ordering(180) 00:19:42.288 fused_ordering(181) 00:19:42.288 fused_ordering(182) 00:19:42.288 fused_ordering(183) 00:19:42.288 fused_ordering(184) 00:19:42.288 fused_ordering(185) 00:19:42.288 fused_ordering(186) 00:19:42.288 fused_ordering(187) 00:19:42.288 fused_ordering(188) 00:19:42.288 fused_ordering(189) 00:19:42.288 fused_ordering(190) 00:19:42.288 fused_ordering(191) 00:19:42.288 fused_ordering(192) 00:19:42.288 fused_ordering(193) 00:19:42.288 fused_ordering(194) 00:19:42.288 fused_ordering(195) 00:19:42.288 fused_ordering(196) 00:19:42.288 fused_ordering(197) 00:19:42.288 fused_ordering(198) 00:19:42.288 fused_ordering(199) 00:19:42.288 fused_ordering(200) 00:19:42.288 fused_ordering(201) 00:19:42.288 fused_ordering(202) 00:19:42.288 fused_ordering(203) 00:19:42.288 fused_ordering(204) 00:19:42.288 fused_ordering(205) 00:19:42.547 fused_ordering(206) 00:19:42.547 fused_ordering(207) 00:19:42.547 fused_ordering(208) 00:19:42.547 fused_ordering(209) 00:19:42.547 fused_ordering(210) 00:19:42.547 fused_ordering(211) 00:19:42.547 fused_ordering(212) 00:19:42.547 fused_ordering(213) 00:19:42.547 fused_ordering(214) 00:19:42.547 
fused_ordering(215) 00:19:42.547 fused_ordering(216) 00:19:42.547 fused_ordering(217) 00:19:42.547 fused_ordering(218) 00:19:42.547 fused_ordering(219) 00:19:42.547 fused_ordering(220) 00:19:42.547 fused_ordering(221) 00:19:42.547 fused_ordering(222) 00:19:42.547 fused_ordering(223) 00:19:42.547 fused_ordering(224) 00:19:42.547 fused_ordering(225) 00:19:42.547 fused_ordering(226) 00:19:42.547 fused_ordering(227) 00:19:42.547 fused_ordering(228) 00:19:42.547 fused_ordering(229) 00:19:42.547 fused_ordering(230) 00:19:42.547 fused_ordering(231) 00:19:42.547 fused_ordering(232) 00:19:42.547 fused_ordering(233) 00:19:42.547 fused_ordering(234) 00:19:42.547 fused_ordering(235) 00:19:42.547 fused_ordering(236) 00:19:42.547 fused_ordering(237) 00:19:42.547 fused_ordering(238) 00:19:42.547 fused_ordering(239) 00:19:42.547 fused_ordering(240) 00:19:42.547 fused_ordering(241) 00:19:42.547 fused_ordering(242) 00:19:42.547 fused_ordering(243) 00:19:42.547 fused_ordering(244) 00:19:42.547 fused_ordering(245) 00:19:42.547 fused_ordering(246) 00:19:42.547 fused_ordering(247) 00:19:42.547 fused_ordering(248) 00:19:42.547 fused_ordering(249) 00:19:42.547 fused_ordering(250) 00:19:42.547 fused_ordering(251) 00:19:42.547 fused_ordering(252) 00:19:42.547 fused_ordering(253) 00:19:42.547 fused_ordering(254) 00:19:42.547 fused_ordering(255) 00:19:42.547 fused_ordering(256) 00:19:42.547 fused_ordering(257) 00:19:42.547 fused_ordering(258) 00:19:42.547 fused_ordering(259) 00:19:42.547 fused_ordering(260) 00:19:42.547 fused_ordering(261) 00:19:42.547 fused_ordering(262) 00:19:42.547 fused_ordering(263) 00:19:42.547 fused_ordering(264) 00:19:42.547 fused_ordering(265) 00:19:42.547 fused_ordering(266) 00:19:42.547 fused_ordering(267) 00:19:42.547 fused_ordering(268) 00:19:42.547 fused_ordering(269) 00:19:42.547 fused_ordering(270) 00:19:42.547 fused_ordering(271) 00:19:42.547 fused_ordering(272) 00:19:42.547 fused_ordering(273) 00:19:42.547 fused_ordering(274) 00:19:42.547 fused_ordering(275) 00:19:42.547 fused_ordering(276) 00:19:42.547 fused_ordering(277) 00:19:42.547 fused_ordering(278) 00:19:42.547 fused_ordering(279) 00:19:42.547 fused_ordering(280) 00:19:42.547 fused_ordering(281) 00:19:42.547 fused_ordering(282) 00:19:42.547 fused_ordering(283) 00:19:42.547 fused_ordering(284) 00:19:42.547 fused_ordering(285) 00:19:42.547 fused_ordering(286) 00:19:42.547 fused_ordering(287) 00:19:42.547 fused_ordering(288) 00:19:42.547 fused_ordering(289) 00:19:42.547 fused_ordering(290) 00:19:42.547 fused_ordering(291) 00:19:42.547 fused_ordering(292) 00:19:42.547 fused_ordering(293) 00:19:42.547 fused_ordering(294) 00:19:42.547 fused_ordering(295) 00:19:42.547 fused_ordering(296) 00:19:42.547 fused_ordering(297) 00:19:42.547 fused_ordering(298) 00:19:42.547 fused_ordering(299) 00:19:42.547 fused_ordering(300) 00:19:42.547 fused_ordering(301) 00:19:42.547 fused_ordering(302) 00:19:42.547 fused_ordering(303) 00:19:42.547 fused_ordering(304) 00:19:42.547 fused_ordering(305) 00:19:42.547 fused_ordering(306) 00:19:42.547 fused_ordering(307) 00:19:42.547 fused_ordering(308) 00:19:42.547 fused_ordering(309) 00:19:42.547 fused_ordering(310) 00:19:42.547 fused_ordering(311) 00:19:42.547 fused_ordering(312) 00:19:42.547 fused_ordering(313) 00:19:42.547 fused_ordering(314) 00:19:42.547 fused_ordering(315) 00:19:42.547 fused_ordering(316) 00:19:42.547 fused_ordering(317) 00:19:42.547 fused_ordering(318) 00:19:42.547 fused_ordering(319) 00:19:42.547 fused_ordering(320) 00:19:42.547 fused_ordering(321) 00:19:42.547 fused_ordering(322) 
00:19:42.547 fused_ordering(323) 00:19:42.547 fused_ordering(324) 00:19:42.547 fused_ordering(325) 00:19:42.547 fused_ordering(326) 00:19:42.547 fused_ordering(327) 00:19:42.547 fused_ordering(328) 00:19:42.547 fused_ordering(329) 00:19:42.547 fused_ordering(330) 00:19:42.547 fused_ordering(331) 00:19:42.547 fused_ordering(332) 00:19:42.547 fused_ordering(333) 00:19:42.547 fused_ordering(334) 00:19:42.547 fused_ordering(335) 00:19:42.547 fused_ordering(336) 00:19:42.547 fused_ordering(337) 00:19:42.547 fused_ordering(338) 00:19:42.547 fused_ordering(339) 00:19:42.547 fused_ordering(340) 00:19:42.547 fused_ordering(341) 00:19:42.547 fused_ordering(342) 00:19:42.547 fused_ordering(343) 00:19:42.547 fused_ordering(344) 00:19:42.547 fused_ordering(345) 00:19:42.547 fused_ordering(346) 00:19:42.547 fused_ordering(347) 00:19:42.547 fused_ordering(348) 00:19:42.547 fused_ordering(349) 00:19:42.547 fused_ordering(350) 00:19:42.547 fused_ordering(351) 00:19:42.547 fused_ordering(352) 00:19:42.547 fused_ordering(353) 00:19:42.547 fused_ordering(354) 00:19:42.547 fused_ordering(355) 00:19:42.547 fused_ordering(356) 00:19:42.547 fused_ordering(357) 00:19:42.547 fused_ordering(358) 00:19:42.547 fused_ordering(359) 00:19:42.547 fused_ordering(360) 00:19:42.547 fused_ordering(361) 00:19:42.547 fused_ordering(362) 00:19:42.547 fused_ordering(363) 00:19:42.547 fused_ordering(364) 00:19:42.547 fused_ordering(365) 00:19:42.547 fused_ordering(366) 00:19:42.547 fused_ordering(367) 00:19:42.547 fused_ordering(368) 00:19:42.547 fused_ordering(369) 00:19:42.547 fused_ordering(370) 00:19:42.547 fused_ordering(371) 00:19:42.547 fused_ordering(372) 00:19:42.547 fused_ordering(373) 00:19:42.547 fused_ordering(374) 00:19:42.547 fused_ordering(375) 00:19:42.547 fused_ordering(376) 00:19:42.547 fused_ordering(377) 00:19:42.547 fused_ordering(378) 00:19:42.547 fused_ordering(379) 00:19:42.547 fused_ordering(380) 00:19:42.547 fused_ordering(381) 00:19:42.547 fused_ordering(382) 00:19:42.547 fused_ordering(383) 00:19:42.547 fused_ordering(384) 00:19:42.547 fused_ordering(385) 00:19:42.548 fused_ordering(386) 00:19:42.548 fused_ordering(387) 00:19:42.548 fused_ordering(388) 00:19:42.548 fused_ordering(389) 00:19:42.548 fused_ordering(390) 00:19:42.548 fused_ordering(391) 00:19:42.548 fused_ordering(392) 00:19:42.548 fused_ordering(393) 00:19:42.548 fused_ordering(394) 00:19:42.548 fused_ordering(395) 00:19:42.548 fused_ordering(396) 00:19:42.548 fused_ordering(397) 00:19:42.548 fused_ordering(398) 00:19:42.548 fused_ordering(399) 00:19:42.548 fused_ordering(400) 00:19:42.548 fused_ordering(401) 00:19:42.548 fused_ordering(402) 00:19:42.548 fused_ordering(403) 00:19:42.548 fused_ordering(404) 00:19:42.548 fused_ordering(405) 00:19:42.548 fused_ordering(406) 00:19:42.548 fused_ordering(407) 00:19:42.548 fused_ordering(408) 00:19:42.548 fused_ordering(409) 00:19:42.548 fused_ordering(410) 00:19:42.806 fused_ordering(411) 00:19:42.806 fused_ordering(412) 00:19:42.806 fused_ordering(413) 00:19:42.806 fused_ordering(414) 00:19:42.806 fused_ordering(415) 00:19:42.806 fused_ordering(416) 00:19:42.806 fused_ordering(417) 00:19:42.806 fused_ordering(418) 00:19:42.806 fused_ordering(419) 00:19:42.806 fused_ordering(420) 00:19:42.806 fused_ordering(421) 00:19:42.806 fused_ordering(422) 00:19:42.806 fused_ordering(423) 00:19:42.806 fused_ordering(424) 00:19:42.806 fused_ordering(425) 00:19:42.806 fused_ordering(426) 00:19:42.806 fused_ordering(427) 00:19:42.806 fused_ordering(428) 00:19:42.806 fused_ordering(429) 00:19:42.806 
fused_ordering(430) 00:19:42.806 fused_ordering(431) 00:19:42.806 fused_ordering(432) 00:19:42.806 fused_ordering(433) 00:19:42.806 fused_ordering(434) 00:19:42.806 fused_ordering(435) 00:19:42.806 fused_ordering(436) 00:19:42.806 fused_ordering(437) 00:19:42.806 fused_ordering(438) 00:19:42.806 fused_ordering(439) 00:19:42.806 fused_ordering(440) 00:19:42.806 fused_ordering(441) 00:19:42.806 fused_ordering(442) 00:19:42.806 fused_ordering(443) 00:19:42.806 fused_ordering(444) 00:19:42.806 fused_ordering(445) 00:19:42.806 fused_ordering(446) 00:19:42.806 fused_ordering(447) 00:19:42.806 fused_ordering(448) 00:19:42.806 fused_ordering(449) 00:19:42.806 fused_ordering(450) 00:19:42.806 fused_ordering(451) 00:19:42.806 fused_ordering(452) 00:19:42.806 fused_ordering(453) 00:19:42.806 fused_ordering(454) 00:19:42.806 fused_ordering(455) 00:19:42.806 fused_ordering(456) 00:19:42.806 fused_ordering(457) 00:19:42.806 fused_ordering(458) 00:19:42.806 fused_ordering(459) 00:19:42.806 fused_ordering(460) 00:19:42.806 fused_ordering(461) 00:19:42.806 fused_ordering(462) 00:19:42.806 fused_ordering(463) 00:19:42.806 fused_ordering(464) 00:19:42.806 fused_ordering(465) 00:19:42.806 fused_ordering(466) 00:19:42.806 fused_ordering(467) 00:19:42.806 fused_ordering(468) 00:19:42.806 fused_ordering(469) 00:19:42.806 fused_ordering(470) 00:19:42.806 fused_ordering(471) 00:19:42.806 fused_ordering(472) 00:19:42.806 fused_ordering(473) 00:19:42.806 fused_ordering(474) 00:19:42.806 fused_ordering(475) 00:19:42.806 fused_ordering(476) 00:19:42.806 fused_ordering(477) 00:19:42.806 fused_ordering(478) 00:19:42.806 fused_ordering(479) 00:19:42.806 fused_ordering(480) 00:19:42.806 fused_ordering(481) 00:19:42.806 fused_ordering(482) 00:19:42.806 fused_ordering(483) 00:19:42.806 fused_ordering(484) 00:19:42.806 fused_ordering(485) 00:19:42.806 fused_ordering(486) 00:19:42.806 fused_ordering(487) 00:19:42.806 fused_ordering(488) 00:19:42.806 fused_ordering(489) 00:19:42.806 fused_ordering(490) 00:19:42.806 fused_ordering(491) 00:19:42.806 fused_ordering(492) 00:19:42.806 fused_ordering(493) 00:19:42.806 fused_ordering(494) 00:19:42.807 fused_ordering(495) 00:19:42.807 fused_ordering(496) 00:19:42.807 fused_ordering(497) 00:19:42.807 fused_ordering(498) 00:19:42.807 fused_ordering(499) 00:19:42.807 fused_ordering(500) 00:19:42.807 fused_ordering(501) 00:19:42.807 fused_ordering(502) 00:19:42.807 fused_ordering(503) 00:19:42.807 fused_ordering(504) 00:19:42.807 fused_ordering(505) 00:19:42.807 fused_ordering(506) 00:19:42.807 fused_ordering(507) 00:19:42.807 fused_ordering(508) 00:19:42.807 fused_ordering(509) 00:19:42.807 fused_ordering(510) 00:19:42.807 fused_ordering(511) 00:19:42.807 fused_ordering(512) 00:19:42.807 fused_ordering(513) 00:19:42.807 fused_ordering(514) 00:19:42.807 fused_ordering(515) 00:19:42.807 fused_ordering(516) 00:19:42.807 fused_ordering(517) 00:19:42.807 fused_ordering(518) 00:19:42.807 fused_ordering(519) 00:19:42.807 fused_ordering(520) 00:19:42.807 fused_ordering(521) 00:19:42.807 fused_ordering(522) 00:19:42.807 fused_ordering(523) 00:19:42.807 fused_ordering(524) 00:19:42.807 fused_ordering(525) 00:19:42.807 fused_ordering(526) 00:19:42.807 fused_ordering(527) 00:19:42.807 fused_ordering(528) 00:19:42.807 fused_ordering(529) 00:19:42.807 fused_ordering(530) 00:19:42.807 fused_ordering(531) 00:19:42.807 fused_ordering(532) 00:19:42.807 fused_ordering(533) 00:19:42.807 fused_ordering(534) 00:19:42.807 fused_ordering(535) 00:19:42.807 fused_ordering(536) 00:19:42.807 fused_ordering(537) 
00:19:42.807 fused_ordering(538) 00:19:42.807 fused_ordering(539) 00:19:42.807 fused_ordering(540) 00:19:42.807 fused_ordering(541) 00:19:42.807 fused_ordering(542) 00:19:42.807 fused_ordering(543) 00:19:42.807 fused_ordering(544) 00:19:42.807 fused_ordering(545) 00:19:42.807 fused_ordering(546) 00:19:42.807 fused_ordering(547) 00:19:42.807 fused_ordering(548) 00:19:42.807 fused_ordering(549) 00:19:42.807 fused_ordering(550) 00:19:42.807 fused_ordering(551) 00:19:42.807 fused_ordering(552) 00:19:42.807 fused_ordering(553) 00:19:42.807 fused_ordering(554) 00:19:42.807 fused_ordering(555) 00:19:42.807 fused_ordering(556) 00:19:42.807 fused_ordering(557) 00:19:42.807 fused_ordering(558) 00:19:42.807 fused_ordering(559) 00:19:42.807 fused_ordering(560) 00:19:42.807 fused_ordering(561) 00:19:42.807 fused_ordering(562) 00:19:42.807 fused_ordering(563) 00:19:42.807 fused_ordering(564) 00:19:42.807 fused_ordering(565) 00:19:42.807 fused_ordering(566) 00:19:42.807 fused_ordering(567) 00:19:42.807 fused_ordering(568) 00:19:42.807 fused_ordering(569) 00:19:42.807 fused_ordering(570) 00:19:42.807 fused_ordering(571) 00:19:42.807 fused_ordering(572) 00:19:42.807 fused_ordering(573) 00:19:42.807 fused_ordering(574) 00:19:42.807 fused_ordering(575) 00:19:42.807 fused_ordering(576) 00:19:42.807 fused_ordering(577) 00:19:42.807 fused_ordering(578) 00:19:42.807 fused_ordering(579) 00:19:42.807 fused_ordering(580) 00:19:42.807 fused_ordering(581) 00:19:42.807 fused_ordering(582) 00:19:42.807 fused_ordering(583) 00:19:42.807 fused_ordering(584) 00:19:42.807 fused_ordering(585) 00:19:42.807 fused_ordering(586) 00:19:42.807 fused_ordering(587) 00:19:42.807 fused_ordering(588) 00:19:42.807 fused_ordering(589) 00:19:42.807 fused_ordering(590) 00:19:42.807 fused_ordering(591) 00:19:42.807 fused_ordering(592) 00:19:42.807 fused_ordering(593) 00:19:42.807 fused_ordering(594) 00:19:42.807 fused_ordering(595) 00:19:42.807 fused_ordering(596) 00:19:42.807 fused_ordering(597) 00:19:42.807 fused_ordering(598) 00:19:42.807 fused_ordering(599) 00:19:42.807 fused_ordering(600) 00:19:42.807 fused_ordering(601) 00:19:42.807 fused_ordering(602) 00:19:42.807 fused_ordering(603) 00:19:42.807 fused_ordering(604) 00:19:42.807 fused_ordering(605) 00:19:42.807 fused_ordering(606) 00:19:42.807 fused_ordering(607) 00:19:42.807 fused_ordering(608) 00:19:42.807 fused_ordering(609) 00:19:42.807 fused_ordering(610) 00:19:42.807 fused_ordering(611) 00:19:42.807 fused_ordering(612) 00:19:42.807 fused_ordering(613) 00:19:42.807 fused_ordering(614) 00:19:42.807 fused_ordering(615) 00:19:43.374 fused_ordering(616) 00:19:43.374 fused_ordering(617) 00:19:43.374 fused_ordering(618) 00:19:43.374 fused_ordering(619) 00:19:43.374 fused_ordering(620) 00:19:43.374 fused_ordering(621) 00:19:43.374 fused_ordering(622) 00:19:43.374 fused_ordering(623) 00:19:43.374 fused_ordering(624) 00:19:43.374 fused_ordering(625) 00:19:43.374 fused_ordering(626) 00:19:43.374 fused_ordering(627) 00:19:43.374 fused_ordering(628) 00:19:43.374 fused_ordering(629) 00:19:43.374 fused_ordering(630) 00:19:43.374 fused_ordering(631) 00:19:43.374 fused_ordering(632) 00:19:43.374 fused_ordering(633) 00:19:43.374 fused_ordering(634) 00:19:43.374 fused_ordering(635) 00:19:43.374 fused_ordering(636) 00:19:43.374 fused_ordering(637) 00:19:43.374 fused_ordering(638) 00:19:43.374 fused_ordering(639) 00:19:43.374 fused_ordering(640) 00:19:43.374 fused_ordering(641) 00:19:43.374 fused_ordering(642) 00:19:43.374 fused_ordering(643) 00:19:43.374 fused_ordering(644) 00:19:43.374 
fused_ordering(645) 00:19:43.374 fused_ordering(646) 00:19:43.374 fused_ordering(647) 00:19:43.374 fused_ordering(648) 00:19:43.374 fused_ordering(649) 00:19:43.374 fused_ordering(650) 00:19:43.374 fused_ordering(651) 00:19:43.374 fused_ordering(652) 00:19:43.374 fused_ordering(653) 00:19:43.374 fused_ordering(654) 00:19:43.374 fused_ordering(655) 00:19:43.374 fused_ordering(656) 00:19:43.374 fused_ordering(657) 00:19:43.374 fused_ordering(658) 00:19:43.374 fused_ordering(659) 00:19:43.374 fused_ordering(660) 00:19:43.374 fused_ordering(661) 00:19:43.374 fused_ordering(662) 00:19:43.374 fused_ordering(663) 00:19:43.374 fused_ordering(664) 00:19:43.374 fused_ordering(665) 00:19:43.374 fused_ordering(666) 00:19:43.374 fused_ordering(667) 00:19:43.374 fused_ordering(668) 00:19:43.374 fused_ordering(669) 00:19:43.374 fused_ordering(670) 00:19:43.374 fused_ordering(671) 00:19:43.374 fused_ordering(672) 00:19:43.374 fused_ordering(673) 00:19:43.374 fused_ordering(674) 00:19:43.374 fused_ordering(675) 00:19:43.374 fused_ordering(676) 00:19:43.374 fused_ordering(677) 00:19:43.374 fused_ordering(678) 00:19:43.374 fused_ordering(679) 00:19:43.374 fused_ordering(680) 00:19:43.374 fused_ordering(681) 00:19:43.374 fused_ordering(682) 00:19:43.374 fused_ordering(683) 00:19:43.374 fused_ordering(684) 00:19:43.374 fused_ordering(685) 00:19:43.374 fused_ordering(686) 00:19:43.374 fused_ordering(687) 00:19:43.374 fused_ordering(688) 00:19:43.374 fused_ordering(689) 00:19:43.374 fused_ordering(690) 00:19:43.374 fused_ordering(691) 00:19:43.374 fused_ordering(692) 00:19:43.374 fused_ordering(693) 00:19:43.375 fused_ordering(694) 00:19:43.375 fused_ordering(695) 00:19:43.375 fused_ordering(696) 00:19:43.375 fused_ordering(697) 00:19:43.375 fused_ordering(698) 00:19:43.375 fused_ordering(699) 00:19:43.375 fused_ordering(700) 00:19:43.375 fused_ordering(701) 00:19:43.375 fused_ordering(702) 00:19:43.375 fused_ordering(703) 00:19:43.375 fused_ordering(704) 00:19:43.375 fused_ordering(705) 00:19:43.375 fused_ordering(706) 00:19:43.375 fused_ordering(707) 00:19:43.375 fused_ordering(708) 00:19:43.375 fused_ordering(709) 00:19:43.375 fused_ordering(710) 00:19:43.375 fused_ordering(711) 00:19:43.375 fused_ordering(712) 00:19:43.375 fused_ordering(713) 00:19:43.375 fused_ordering(714) 00:19:43.375 fused_ordering(715) 00:19:43.375 fused_ordering(716) 00:19:43.375 fused_ordering(717) 00:19:43.375 fused_ordering(718) 00:19:43.375 fused_ordering(719) 00:19:43.375 fused_ordering(720) 00:19:43.375 fused_ordering(721) 00:19:43.375 fused_ordering(722) 00:19:43.375 fused_ordering(723) 00:19:43.375 fused_ordering(724) 00:19:43.375 fused_ordering(725) 00:19:43.375 fused_ordering(726) 00:19:43.375 fused_ordering(727) 00:19:43.375 fused_ordering(728) 00:19:43.375 fused_ordering(729) 00:19:43.375 fused_ordering(730) 00:19:43.375 fused_ordering(731) 00:19:43.375 fused_ordering(732) 00:19:43.375 fused_ordering(733) 00:19:43.375 fused_ordering(734) 00:19:43.375 fused_ordering(735) 00:19:43.375 fused_ordering(736) 00:19:43.375 fused_ordering(737) 00:19:43.375 fused_ordering(738) 00:19:43.375 fused_ordering(739) 00:19:43.375 fused_ordering(740) 00:19:43.375 fused_ordering(741) 00:19:43.375 fused_ordering(742) 00:19:43.375 fused_ordering(743) 00:19:43.375 fused_ordering(744) 00:19:43.375 fused_ordering(745) 00:19:43.375 fused_ordering(746) 00:19:43.375 fused_ordering(747) 00:19:43.375 fused_ordering(748) 00:19:43.375 fused_ordering(749) 00:19:43.375 fused_ordering(750) 00:19:43.375 fused_ordering(751) 00:19:43.375 fused_ordering(752) 
00:19:43.375 fused_ordering(753) 00:19:43.375 fused_ordering(754) 00:19:43.375 fused_ordering(755) 00:19:43.375 fused_ordering(756) 00:19:43.375 fused_ordering(757) 00:19:43.375 fused_ordering(758) 00:19:43.375 fused_ordering(759) 00:19:43.375 fused_ordering(760) 00:19:43.375 fused_ordering(761) 00:19:43.375 fused_ordering(762) 00:19:43.375 fused_ordering(763) 00:19:43.375 fused_ordering(764) 00:19:43.375 fused_ordering(765) 00:19:43.375 fused_ordering(766) 00:19:43.375 fused_ordering(767) 00:19:43.375 fused_ordering(768) 00:19:43.375 fused_ordering(769) 00:19:43.375 fused_ordering(770) 00:19:43.375 fused_ordering(771) 00:19:43.375 fused_ordering(772) 00:19:43.375 fused_ordering(773) 00:19:43.375 fused_ordering(774) 00:19:43.375 fused_ordering(775) 00:19:43.375 fused_ordering(776) 00:19:43.375 fused_ordering(777) 00:19:43.375 fused_ordering(778) 00:19:43.375 fused_ordering(779) 00:19:43.375 fused_ordering(780) 00:19:43.375 fused_ordering(781) 00:19:43.375 fused_ordering(782) 00:19:43.375 fused_ordering(783) 00:19:43.375 fused_ordering(784) 00:19:43.375 fused_ordering(785) 00:19:43.375 fused_ordering(786) 00:19:43.375 fused_ordering(787) 00:19:43.375 fused_ordering(788) 00:19:43.375 fused_ordering(789) 00:19:43.375 fused_ordering(790) 00:19:43.375 fused_ordering(791) 00:19:43.375 fused_ordering(792) 00:19:43.375 fused_ordering(793) 00:19:43.375 fused_ordering(794) 00:19:43.375 fused_ordering(795) 00:19:43.375 fused_ordering(796) 00:19:43.375 fused_ordering(797) 00:19:43.375 fused_ordering(798) 00:19:43.375 fused_ordering(799) 00:19:43.375 fused_ordering(800) 00:19:43.375 fused_ordering(801) 00:19:43.375 fused_ordering(802) 00:19:43.375 fused_ordering(803) 00:19:43.375 fused_ordering(804) 00:19:43.375 fused_ordering(805) 00:19:43.375 fused_ordering(806) 00:19:43.375 fused_ordering(807) 00:19:43.375 fused_ordering(808) 00:19:43.375 fused_ordering(809) 00:19:43.375 fused_ordering(810) 00:19:43.375 fused_ordering(811) 00:19:43.375 fused_ordering(812) 00:19:43.375 fused_ordering(813) 00:19:43.375 fused_ordering(814) 00:19:43.375 fused_ordering(815) 00:19:43.375 fused_ordering(816) 00:19:43.375 fused_ordering(817) 00:19:43.375 fused_ordering(818) 00:19:43.375 fused_ordering(819) 00:19:43.375 fused_ordering(820) 00:19:43.943 fused_ordering(821) 00:19:43.943 fused_ordering(822) 00:19:43.943 fused_ordering(823) 00:19:43.943 fused_ordering(824) 00:19:43.943 fused_ordering(825) 00:19:43.943 fused_ordering(826) 00:19:43.943 fused_ordering(827) 00:19:43.943 fused_ordering(828) 00:19:43.943 fused_ordering(829) 00:19:43.943 fused_ordering(830) 00:19:43.943 fused_ordering(831) 00:19:43.943 fused_ordering(832) 00:19:43.943 fused_ordering(833) 00:19:43.943 fused_ordering(834) 00:19:43.943 fused_ordering(835) 00:19:43.943 fused_ordering(836) 00:19:43.943 fused_ordering(837) 00:19:43.943 fused_ordering(838) 00:19:43.943 fused_ordering(839) 00:19:43.943 fused_ordering(840) 00:19:43.943 fused_ordering(841) 00:19:43.943 fused_ordering(842) 00:19:43.943 fused_ordering(843) 00:19:43.943 fused_ordering(844) 00:19:43.943 fused_ordering(845) 00:19:43.943 fused_ordering(846) 00:19:43.943 fused_ordering(847) 00:19:43.943 fused_ordering(848) 00:19:43.943 fused_ordering(849) 00:19:43.943 fused_ordering(850) 00:19:43.943 fused_ordering(851) 00:19:43.943 fused_ordering(852) 00:19:43.943 fused_ordering(853) 00:19:43.943 fused_ordering(854) 00:19:43.943 fused_ordering(855) 00:19:43.943 fused_ordering(856) 00:19:43.943 fused_ordering(857) 00:19:43.943 fused_ordering(858) 00:19:43.943 fused_ordering(859) 00:19:43.943 
fused_ordering(860) 00:19:43.943 fused_ordering(861) 00:19:43.943 fused_ordering(862) 00:19:43.943 fused_ordering(863) 00:19:43.943 fused_ordering(864) 00:19:43.943 fused_ordering(865) 00:19:43.943 fused_ordering(866) 00:19:43.943 fused_ordering(867) 00:19:43.943 fused_ordering(868) 00:19:43.943 fused_ordering(869) 00:19:43.943 fused_ordering(870) 00:19:43.943 fused_ordering(871) 00:19:43.943 fused_ordering(872) 00:19:43.943 fused_ordering(873) 00:19:43.943 fused_ordering(874) 00:19:43.943 fused_ordering(875) 00:19:43.943 fused_ordering(876) 00:19:43.943 fused_ordering(877) 00:19:43.943 fused_ordering(878) 00:19:43.943 fused_ordering(879) 00:19:43.943 fused_ordering(880) 00:19:43.943 fused_ordering(881) 00:19:43.943 fused_ordering(882) 00:19:43.943 fused_ordering(883) 00:19:43.943 fused_ordering(884) 00:19:43.943 fused_ordering(885) 00:19:43.943 fused_ordering(886) 00:19:43.943 fused_ordering(887) 00:19:43.943 fused_ordering(888) 00:19:43.943 fused_ordering(889) 00:19:43.943 fused_ordering(890) 00:19:43.943 fused_ordering(891) 00:19:43.943 fused_ordering(892) 00:19:43.943 fused_ordering(893) 00:19:43.943 fused_ordering(894) 00:19:43.943 fused_ordering(895) 00:19:43.943 fused_ordering(896) 00:19:43.943 fused_ordering(897) 00:19:43.943 fused_ordering(898) 00:19:43.943 fused_ordering(899) 00:19:43.943 fused_ordering(900) 00:19:43.943 fused_ordering(901) 00:19:43.943 fused_ordering(902) 00:19:43.943 fused_ordering(903) 00:19:43.943 fused_ordering(904) 00:19:43.943 fused_ordering(905) 00:19:43.943 fused_ordering(906) 00:19:43.943 fused_ordering(907) 00:19:43.943 fused_ordering(908) 00:19:43.943 fused_ordering(909) 00:19:43.943 fused_ordering(910) 00:19:43.943 fused_ordering(911) 00:19:43.943 fused_ordering(912) 00:19:43.943 fused_ordering(913) 00:19:43.943 fused_ordering(914) 00:19:43.943 fused_ordering(915) 00:19:43.943 fused_ordering(916) 00:19:43.943 fused_ordering(917) 00:19:43.943 fused_ordering(918) 00:19:43.943 fused_ordering(919) 00:19:43.943 fused_ordering(920) 00:19:43.943 fused_ordering(921) 00:19:43.943 fused_ordering(922) 00:19:43.943 fused_ordering(923) 00:19:43.943 fused_ordering(924) 00:19:43.943 fused_ordering(925) 00:19:43.943 fused_ordering(926) 00:19:43.943 fused_ordering(927) 00:19:43.943 fused_ordering(928) 00:19:43.943 fused_ordering(929) 00:19:43.943 fused_ordering(930) 00:19:43.943 fused_ordering(931) 00:19:43.943 fused_ordering(932) 00:19:43.943 fused_ordering(933) 00:19:43.943 fused_ordering(934) 00:19:43.943 fused_ordering(935) 00:19:43.943 fused_ordering(936) 00:19:43.943 fused_ordering(937) 00:19:43.943 fused_ordering(938) 00:19:43.943 fused_ordering(939) 00:19:43.943 fused_ordering(940) 00:19:43.943 fused_ordering(941) 00:19:43.943 fused_ordering(942) 00:19:43.943 fused_ordering(943) 00:19:43.943 fused_ordering(944) 00:19:43.943 fused_ordering(945) 00:19:43.943 fused_ordering(946) 00:19:43.943 fused_ordering(947) 00:19:43.943 fused_ordering(948) 00:19:43.943 fused_ordering(949) 00:19:43.943 fused_ordering(950) 00:19:43.943 fused_ordering(951) 00:19:43.943 fused_ordering(952) 00:19:43.943 fused_ordering(953) 00:19:43.943 fused_ordering(954) 00:19:43.943 fused_ordering(955) 00:19:43.943 fused_ordering(956) 00:19:43.943 fused_ordering(957) 00:19:43.943 fused_ordering(958) 00:19:43.943 fused_ordering(959) 00:19:43.943 fused_ordering(960) 00:19:43.943 fused_ordering(961) 00:19:43.943 fused_ordering(962) 00:19:43.943 fused_ordering(963) 00:19:43.943 fused_ordering(964) 00:19:43.943 fused_ordering(965) 00:19:43.944 fused_ordering(966) 00:19:43.944 fused_ordering(967) 
00:19:43.944 fused_ordering(968) 00:19:43.944 fused_ordering(969) 00:19:43.944 fused_ordering(970) 00:19:43.944 fused_ordering(971) 00:19:43.944 fused_ordering(972) 00:19:43.944 fused_ordering(973) 00:19:43.944 fused_ordering(974) 00:19:43.944 fused_ordering(975) 00:19:43.944 fused_ordering(976) 00:19:43.944 fused_ordering(977) 00:19:43.944 fused_ordering(978) 00:19:43.944 fused_ordering(979) 00:19:43.944 fused_ordering(980) 00:19:43.944 fused_ordering(981) 00:19:43.944 fused_ordering(982) 00:19:43.944 fused_ordering(983) 00:19:43.944 fused_ordering(984) 00:19:43.944 fused_ordering(985) 00:19:43.944 fused_ordering(986) 00:19:43.944 fused_ordering(987) 00:19:43.944 fused_ordering(988) 00:19:43.944 fused_ordering(989) 00:19:43.944 fused_ordering(990) 00:19:43.944 fused_ordering(991) 00:19:43.944 fused_ordering(992) 00:19:43.944 fused_ordering(993) 00:19:43.944 fused_ordering(994) 00:19:43.944 fused_ordering(995) 00:19:43.944 fused_ordering(996) 00:19:43.944 fused_ordering(997) 00:19:43.944 fused_ordering(998) 00:19:43.944 fused_ordering(999) 00:19:43.944 fused_ordering(1000) 00:19:43.944 fused_ordering(1001) 00:19:43.944 fused_ordering(1002) 00:19:43.944 fused_ordering(1003) 00:19:43.944 fused_ordering(1004) 00:19:43.944 fused_ordering(1005) 00:19:43.944 fused_ordering(1006) 00:19:43.944 fused_ordering(1007) 00:19:43.944 fused_ordering(1008) 00:19:43.944 fused_ordering(1009) 00:19:43.944 fused_ordering(1010) 00:19:43.944 fused_ordering(1011) 00:19:43.944 fused_ordering(1012) 00:19:43.944 fused_ordering(1013) 00:19:43.944 fused_ordering(1014) 00:19:43.944 fused_ordering(1015) 00:19:43.944 fused_ordering(1016) 00:19:43.944 fused_ordering(1017) 00:19:43.944 fused_ordering(1018) 00:19:43.944 fused_ordering(1019) 00:19:43.944 fused_ordering(1020) 00:19:43.944 fused_ordering(1021) 00:19:43.944 fused_ordering(1022) 00:19:43.944 fused_ordering(1023) 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:43.944 rmmod nvme_tcp 00:19:43.944 rmmod nvme_fabrics 00:19:43.944 rmmod nvme_keyring 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 88209 ']' 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 88209 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 88209 ']' 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 88209 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering 
-- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88209 00:19:43.944 killing process with pid 88209 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88209' 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 88209 00:19:43.944 [2024-05-15 13:37:56.999284] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:43.944 13:37:56 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 88209 00:19:44.202 13:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:44.202 13:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:44.202 13:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:44.202 13:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:44.202 13:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:44.202 13:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.202 13:37:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.202 13:37:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.202 13:37:57 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:44.202 00:19:44.202 real 0m4.163s 00:19:44.202 user 0m4.983s 00:19:44.202 sys 0m1.431s 00:19:44.203 13:37:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:44.203 ************************************ 00:19:44.203 END TEST nvmf_fused_ordering 00:19:44.203 ************************************ 00:19:44.203 13:37:57 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:44.461 13:37:57 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:19:44.461 13:37:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:44.461 13:37:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:44.461 13:37:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:44.461 ************************************ 00:19:44.461 START TEST nvmf_delete_subsystem 00:19:44.461 ************************************ 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:19:44.461 * Looking for test storage... 
00:19:44.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.461 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:44.462 Cannot find device "nvmf_tgt_br" 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:44.462 Cannot find device "nvmf_tgt_br2" 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:44.462 Cannot find device "nvmf_tgt_br" 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:44.462 Cannot find device "nvmf_tgt_br2" 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:19:44.462 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:44.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:44.720 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:44.720 13:37:57 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:44.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:19:44.720 00:19:44.720 --- 10.0.0.2 ping statistics --- 00:19:44.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.720 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:44.720 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:44.720 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:19:44.720 00:19:44.720 --- 10.0.0.3 ping statistics --- 00:19:44.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.720 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:44.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:44.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:44.720 00:19:44.720 --- 10.0.0.1 ping statistics --- 00:19:44.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.720 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:44.720 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:44.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
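The nvmf_veth_init sequence above builds the virtual topology the rest of this test talks over: nvmf_init_if (10.0.0.1) on the host side, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge, with TCP port 4420 opened on the initiator interface. A minimal standalone sketch of the same setup, using only the names and addresses that appear in the trace (run as root; error handling and the teardown of any stale devices omitted):

#!/usr/bin/env bash
# Recreate the veth/namespace topology traced above; names and addresses are
# taken from this log, everything else is a simplified sketch.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # host-side pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target-side pair 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target-side pair 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                           # sanity check, as in the trace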
00:19:44.978 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=88473 00:19:44.978 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:44.978 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 88473 00:19:44.978 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 88473 ']' 00:19:44.978 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.978 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:44.978 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.978 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:44.978 13:37:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:44.978 [2024-05-15 13:37:57.899500] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:19:44.978 [2024-05-15 13:37:57.900002] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.978 [2024-05-15 13:37:58.026403] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:44.979 [2024-05-15 13:37:58.043717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:45.237 [2024-05-15 13:37:58.185723] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.237 [2024-05-15 13:37:58.185812] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.237 [2024-05-15 13:37:58.185832] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.237 [2024-05-15 13:37:58.185843] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.237 [2024-05-15 13:37:58.185854] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:45.237 [2024-05-15 13:37:58.186048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.237 [2024-05-15 13:37:58.186241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.802 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:45.802 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:19:45.802 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.802 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:45.802 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:46.058 [2024-05-15 13:37:58.917361] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:46.058 [2024-05-15 13:37:58.934008] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:46.058 [2024-05-15 13:37:58.934305] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:46.058 NULL1 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.058 13:37:58 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:46.058 Delay0 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=88525 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:19:46.058 13:37:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:19:46.058 [2024-05-15 13:37:59.138279] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:47.956 13:38:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:47.956 13:38:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.956 13:38:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 starting I/O failed: -6 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 starting I/O failed: -6 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 starting I/O failed: -6 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 starting I/O failed: -6 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 starting I/O failed: -6 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 starting I/O failed: -6 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 starting I/O failed: -6 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read 
completed with error (sct=0, sc=8) 00:19:48.213 starting I/O failed: -6 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 starting I/O failed: -6 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 starting I/O failed: -6 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 starting I/O failed: -6 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 [2024-05-15 13:38:01.189317] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f73bc00c470 is same with the state(5) to be set 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 Write completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.213 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, 
sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 starting I/O failed: -6 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 starting I/O failed: -6 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 starting I/O failed: -6 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 starting I/O failed: -6 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 starting I/O failed: -6 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 starting I/O failed: -6 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 starting I/O failed: -6 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 starting I/O failed: -6 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 starting I/O failed: -6 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 starting I/O failed: -6 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 [2024-05-15 13:38:01.190818] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e9820 is same with the state(5) to be set 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read 
completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Write completed with error (sct=0, sc=8) 00:19:48.214 Read completed with error (sct=0, sc=8) 00:19:48.215 Write completed with error (sct=0, sc=8) 00:19:48.215 Read completed with error (sct=0, sc=8) 00:19:48.215 Read completed with error (sct=0, sc=8) 00:19:48.215 Read completed with error (sct=0, sc=8) 00:19:48.215 Write completed with error (sct=0, sc=8) 00:19:48.215 Read completed with error (sct=0, sc=8) 00:19:48.215 Write completed with error (sct=0, sc=8) 00:19:48.215 Write completed with error (sct=0, sc=8) 00:19:48.215 Read completed with error (sct=0, sc=8) 00:19:48.215 Read completed with error (sct=0, sc=8) 00:19:48.215 Read completed with error (sct=0, sc=8) 00:19:48.215 Read completed with error (sct=0, sc=8) 00:19:48.215 Write completed with error (sct=0, sc=8) 00:19:48.215 Read completed with error (sct=0, sc=8) 00:19:48.215 Read completed with error (sct=0, sc=8) 00:19:48.215 Read completed with error (sct=0, sc=8) 00:19:48.215 Read completed with error (sct=0, sc=8) 00:19:48.215 Write completed with error (sct=0, sc=8) 00:19:48.215 Write completed with error (sct=0, sc=8) 00:19:48.215 Write completed with error (sct=0, sc=8) 00:19:48.215 Write completed with error (sct=0, sc=8) 00:19:48.215 Read completed with error (sct=0, sc=8) 00:19:49.148 [2024-05-15 13:38:02.155738] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d2180 is same with the state(5) to be set 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with 
error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 [2024-05-15 13:38:02.191016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f73bc00bfe0 is same with the state(5) to be set 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 [2024-05-15 13:38:02.192597] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f73bc00c780 is same with the state(5) to be set 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 [2024-05-15 13:38:02.193519] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e9a00 is same with the state(5) to be set 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Read completed 
with error (sct=0, sc=8) 00:19:49.148 Read completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 Write completed with error (sct=0, sc=8) 00:19:49.148 [2024-05-15 13:38:02.194382] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e9640 is same with the state(5) to be set 00:19:49.148 Initializing NVMe Controllers 00:19:49.148 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:49.148 Controller IO queue size 128, less than required. 00:19:49.148 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:49.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:19:49.148 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:19:49.148 Initialization complete. Launching workers. 00:19:49.148 ======================================================== 00:19:49.148 Latency(us) 00:19:49.148 Device Information : IOPS MiB/s Average min max 00:19:49.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 158.75 0.08 925053.00 411.38 1018106.97 00:19:49.148 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 165.18 0.08 906487.22 746.95 1017977.21 00:19:49.148 ======================================================== 00:19:49.148 Total : 323.93 0.16 915585.87 411.38 1018106.97 00:19:49.148 00:19:49.148 [2024-05-15 13:38:02.195113] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d2180 (9): Bad file descriptor 00:19:49.148 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:19:49.148 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.148 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:19:49.148 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 88525 00:19:49.148 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:19:49.715 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:19:49.715 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 88525 00:19:49.715 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (88525) - No such process 00:19:49.715 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 88525 00:19:49.715 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:19:49.715 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 88525 00:19:49.715 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:19:49.715 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.715 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:19:49.715 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:49.715 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 88525 00:19:49.715 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # 
es=1 00:19:49.715 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:49.716 [2024-05-15 13:38:02.723696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=88571 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88571 00:19:49.716 13:38:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:49.973 [2024-05-15 13:38:02.903873] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
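The first pass that produced the flood of failed completions above has a simple shape: run spdk_nvme_perf against the subsystem over TCP, delete the subsystem out from under it after a couple of seconds, then poll the perf pid until it exits (the "No such process" message is that poll failing once it has). A condensed sketch using the flag values printed at delete_subsystem.sh line 26 in this trace; variable names are illustrative, and the second pass below uses -t 3 and a bound of 20 instead:

    # Initiator-side load in the background, as launched in the trace above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2

    # Tear the subsystem out from under the running initiator.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Bounded wait for perf to notice the teardown and exit.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && exit 1
        sleep 0.5
    done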
00:19:50.233 13:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:50.233 13:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88571 00:19:50.233 13:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:50.799 13:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:50.799 13:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88571 00:19:50.799 13:38:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:51.364 13:38:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:51.364 13:38:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88571 00:19:51.364 13:38:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:51.930 13:38:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:51.930 13:38:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88571 00:19:51.930 13:38:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:52.189 13:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:52.189 13:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88571 00:19:52.189 13:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:52.758 13:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:52.758 13:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88571 00:19:52.758 13:38:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:19:53.015 Initializing NVMe Controllers 00:19:53.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:53.015 Controller IO queue size 128, less than required. 00:19:53.015 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:53.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:19:53.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:19:53.015 Initialization complete. Launching workers. 
00:19:53.015 ======================================================== 00:19:53.015 Latency(us) 00:19:53.015 Device Information : IOPS MiB/s Average min max 00:19:53.015 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003038.96 1000160.58 1008151.36 00:19:53.015 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004860.03 1000402.03 1012278.35 00:19:53.015 ======================================================== 00:19:53.015 Total : 256.00 0.12 1003949.49 1000160.58 1012278.35 00:19:53.015 00:19:53.273 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:19:53.273 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 88571 00:19:53.273 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (88571) - No such process 00:19:53.273 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 88571 00:19:53.273 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:19:53.273 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:19:53.273 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.273 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:19:53.273 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.273 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:19:53.273 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.273 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.273 rmmod nvme_tcp 00:19:53.273 rmmod nvme_fabrics 00:19:53.273 rmmod nvme_keyring 00:19:53.273 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 88473 ']' 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 88473 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 88473 ']' 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 88473 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88473 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:53.531 killing process with pid 88473 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88473' 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 88473 00:19:53.531 [2024-05-15 13:38:06.400070] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 88473 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.531 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.789 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:53.789 00:19:53.789 real 0m9.336s 00:19:53.789 user 0m28.704s 00:19:53.789 sys 0m1.501s 00:19:53.789 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:53.789 13:38:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:19:53.789 ************************************ 00:19:53.789 END TEST nvmf_delete_subsystem 00:19:53.789 ************************************ 00:19:53.789 13:38:06 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:19:53.789 13:38:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:53.790 13:38:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:53.790 13:38:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:53.790 ************************************ 00:19:53.790 START TEST nvmf_ns_masking 00:19:53.790 ************************************ 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:19:53.790 * Looking for test storage... 
00:19:53.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=94649963-be8e-420b-a7ac-5be4644676db 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:53.790 13:38:06 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:53.790 Cannot find device "nvmf_tgt_br" 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:53.790 Cannot find device "nvmf_tgt_br2" 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:53.790 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:54.049 Cannot find device "nvmf_tgt_br" 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:19:54.049 Cannot find device "nvmf_tgt_br2" 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:54.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:54.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:54.049 13:38:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:54.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:19:54.049 00:19:54.049 --- 10.0.0.2 ping statistics --- 00:19:54.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.049 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:54.049 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:54.049 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:19:54.049 00:19:54.049 --- 10.0.0.3 ping statistics --- 00:19:54.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.049 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:54.049 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:54.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:54.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:19:54.307 00:19:54.307 --- 10.0.0.1 ping statistics --- 00:19:54.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.307 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=88808 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 88808 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 88808 ']' 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:54.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:54.307 13:38:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:54.307 [2024-05-15 13:38:07.239990] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:19:54.307 [2024-05-15 13:38:07.240102] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:54.307 [2024-05-15 13:38:07.367044] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:54.307 [2024-05-15 13:38:07.386118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:54.565 [2024-05-15 13:38:07.490594] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.565 [2024-05-15 13:38:07.490658] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.565 [2024-05-15 13:38:07.490672] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.565 [2024-05-15 13:38:07.490682] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.565 [2024-05-15 13:38:07.490692] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:54.565 [2024-05-15 13:38:07.490821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.565 [2024-05-15 13:38:07.491117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.565 [2024-05-15 13:38:07.491722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:54.565 [2024-05-15 13:38:07.491831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.497 13:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:55.497 13:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:19:55.497 13:38:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:55.497 13:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.497 13:38:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:55.497 13:38:08 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.497 13:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:55.497 [2024-05-15 13:38:08.544511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.497 13:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:19:55.497 13:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:19:55.497 13:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:56.062 Malloc1 00:19:56.062 13:38:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 
00:19:56.062 Malloc2 00:19:56.062 13:38:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:56.320 13:38:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:56.577 13:38:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:56.835 [2024-05-15 13:38:09.834587] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:56.835 [2024-05-15 13:38:09.834969] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.835 13:38:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:19:56.835 13:38:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 94649963-be8e-420b-a7ac-5be4644676db -a 10.0.0.2 -s 4420 -i 4 00:19:57.092 13:38:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:19:57.092 13:38:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:19:57.092 13:38:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:57.092 13:38:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:19:57.092 13:38:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:19:59.053 13:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:59.053 13:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:59.053 13:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:59.053 13:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:19:59.053 13:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:59.053 13:38:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:19:59.053 13:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:19:59.053 13:38:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:59.053 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:19:59.053 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:19:59.053 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:19:59.053 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:59.053 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:19:59.053 [ 0]:0x1 00:19:59.053 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:59.053 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:59.053 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # 
nguid=9a0ea2a0fa4c43bab3cefa968f325847 00:19:59.053 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9a0ea2a0fa4c43bab3cefa968f325847 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:59.053 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:59.311 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:19:59.311 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:19:59.311 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:59.311 [ 0]:0x1 00:19:59.570 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:59.570 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:59.570 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9a0ea2a0fa4c43bab3cefa968f325847 00:19:59.570 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9a0ea2a0fa4c43bab3cefa968f325847 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:59.570 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:19:59.570 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:19:59.570 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:19:59.570 [ 1]:0x2 00:19:59.570 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:59.570 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:19:59.570 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=407e53a8a2f84f5981d4754dbf6ff0fd 00:19:59.570 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 407e53a8a2f84f5981d4754dbf6ff0fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:59.570 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:19:59.570 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:59.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:59.570 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:59.827 13:38:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:20:00.085 13:38:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:20:00.085 13:38:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 94649963-be8e-420b-a7ac-5be4644676db -a 10.0.0.2 -s 4420 -i 4 00:20:00.085 13:38:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:20:00.085 13:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:20:00.085 13:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:20:00.085 13:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:20:00.085 13:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 
-- # nvme_device_counter=1 00:20:00.085 13:38:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:20:02.652 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:20:02.652 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:20:02.652 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:02.653 [ 0]:0x2 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=407e53a8a2f84f5981d4754dbf6ff0fd 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 407e53a8a2f84f5981d4754dbf6ff0fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:20:02.653 [ 0]:0x1 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9a0ea2a0fa4c43bab3cefa968f325847 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9a0ea2a0fa4c43bab3cefa968f325847 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:20:02.653 [ 1]:0x2 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:02.653 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:02.911 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=407e53a8a2f84f5981d4754dbf6ff0fd 00:20:02.911 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 407e53a8a2f84f5981d4754dbf6ff0fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:02.911 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:02.911 13:38:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:20:02.911 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:20:02.911 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:20:02.911 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:20:02.911 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:02.911 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:20:02.911 13:38:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:02.911 13:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:20:02.911 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:02.911 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 
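The ns_is_visible probes traced above and below reduce to two nvme-cli checks per namespace: the NSID has to show up in the controller's namespace list, and its NGUID has to be non-zero, because a namespace that is attached but masked from this host identifies with an all-zero NGUID. A minimal sketch of that pattern, assuming /dev/nvme0 and an nvme-cli with JSON output as in the trace (a simplified stand-in, not the literal ns_masking.sh helper):

    # succeed only if the controller currently exposes the given NSID (e.g. 0x2)
    ns_is_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        # a masked namespace reports an all-zero NGUID in id-ns
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NOT wrapper seen in the trace simply asserts that this check fails once the namespace has been hidden from nqn.2016-06.io.spdk:host1.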
00:20:02.911 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:02.912 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:20:03.171 [ 0]:0x2 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=407e53a8a2f84f5981d4754dbf6ff0fd 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 407e53a8a2f84f5981d4754dbf6ff0fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:03.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:03.171 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:03.429 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:20:03.429 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 94649963-be8e-420b-a7ac-5be4644676db -a 10.0.0.2 -s 4420 -i 4 00:20:03.687 13:38:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:03.687 13:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:20:03.687 13:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:20:03.687 13:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:20:03.687 13:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:20:03.687 13:38:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # nvme_devices=2 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:20:05.586 [ 0]:0x1 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:05.586 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:05.845 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=9a0ea2a0fa4c43bab3cefa968f325847 00:20:05.845 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 9a0ea2a0fa4c43bab3cefa968f325847 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:05.845 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:20:05.845 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:05.845 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:20:05.845 [ 1]:0x2 00:20:05.845 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:05.845 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:05.845 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=407e53a8a2f84f5981d4754dbf6ff0fd 00:20:05.845 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 407e53a8a2f84f5981d4754dbf6ff0fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:05.845 13:38:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:06.104 
13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:06.104 [ 0]:0x2 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=407e53a8a2f84f5981d4754dbf6ff0fd 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 407e53a8a2f84f5981d4754dbf6ff0fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.104 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.105 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:06.105 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:06.363 
[2024-05-15 13:38:19.373857] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:20:06.363 2024/05/15 13:38:19 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:20:06.363 request: 00:20:06.363 { 00:20:06.363 "method": "nvmf_ns_remove_host", 00:20:06.363 "params": { 00:20:06.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:06.363 "nsid": 2, 00:20:06.363 "host": "nqn.2016-06.io.spdk:host1" 00:20:06.363 } 00:20:06.363 } 00:20:06.363 Got JSON-RPC error response 00:20:06.363 GoRPCClient: error on JSON-RPC call 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:20:06.363 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:20:06.621 [ 0]:0x2 00:20:06.621 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:06.621 13:38:19 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:20:06.621 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=407e53a8a2f84f5981d4754dbf6ff0fd 00:20:06.621 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 407e53a8a2f84f5981d4754dbf6ff0fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:06.621 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:20:06.621 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:06.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:06.621 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:06.879 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:06.879 13:38:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:20:06.879 13:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:06.879 13:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:20:06.879 13:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.879 13:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:20:06.879 13:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.879 13:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.879 rmmod nvme_tcp 00:20:06.879 rmmod nvme_fabrics 00:20:06.879 rmmod nvme_keyring 00:20:06.879 13:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:06.879 13:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:20:06.879 13:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:20:06.879 13:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 88808 ']' 00:20:06.880 13:38:19 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 88808 00:20:06.880 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 88808 ']' 00:20:06.880 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 88808 00:20:06.880 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:20:06.880 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:06.880 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88808 00:20:06.880 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:06.880 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:06.880 killing process with pid 88808 00:20:06.880 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88808' 00:20:06.880 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 88808 00:20:06.880 [2024-05-15 13:38:19.925759] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:06.880 13:38:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 88808 00:20:07.138 13:38:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:07.138 13:38:20 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:07.138 13:38:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:07.138 13:38:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.138 13:38:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:07.138 13:38:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.138 13:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.138 13:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.396 13:38:20 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:07.396 00:20:07.396 real 0m13.534s 00:20:07.396 user 0m54.256s 00:20:07.396 sys 0m2.140s 00:20:07.396 13:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:07.397 13:38:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:07.397 ************************************ 00:20:07.397 END TEST nvmf_ns_masking 00:20:07.397 ************************************ 00:20:07.397 13:38:20 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:20:07.397 13:38:20 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:20:07.397 13:38:20 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:07.397 13:38:20 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:07.397 13:38:20 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:07.397 13:38:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:07.397 ************************************ 00:20:07.397 START TEST nvmf_host_management 00:20:07.397 ************************************ 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:20:07.397 * Looking for test storage... 
00:20:07.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 
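Because NET_TYPE is virt here, nvmftestinit falls through to nvmf_veth_init, whose steps are traced below: the initiator keeps 10.0.0.1 on the host side, the target addresses (10.0.0.2 and 10.0.0.3) live on veth peers moved into the nvmf_tgt_ns_spdk network namespace, and everything is stitched back together over a bridge before a ping sanity check. Condensed into a sketch using the interface and namespace names from the trace (second target interface and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                   # host half of each veth pair joins the bridge
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator can reach the target address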
00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:07.397 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:07.398 Cannot find device "nvmf_tgt_br" 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:07.398 Cannot find device "nvmf_tgt_br2" 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:07.398 Cannot find device "nvmf_tgt_br" 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:07.398 Cannot find device "nvmf_tgt_br2" 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:20:07.398 13:38:20 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:07.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:07.656 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:07.656 13:38:20 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:07.656 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:07.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:07.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:20:07.657 00:20:07.657 --- 10.0.0.2 ping statistics --- 00:20:07.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.657 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:20:07.657 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:07.657 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:07.657 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:20:07.657 00:20:07.657 --- 10.0.0.3 ping statistics --- 00:20:07.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.657 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:07.657 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:07.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:07.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:20:07.915 00:20:07.915 --- 10.0.0.1 ping statistics --- 00:20:07.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:07.915 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=89362 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 89362 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 89362 ']' 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:07.915 13:38:20 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:07.915 [2024-05-15 13:38:20.843623] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:07.915 [2024-05-15 13:38:20.843735] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.915 [2024-05-15 13:38:20.971186] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:07.915 [2024-05-15 13:38:20.986001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:08.174 [2024-05-15 13:38:21.082811] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.174 [2024-05-15 13:38:21.082867] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.174 [2024-05-15 13:38:21.082879] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.174 [2024-05-15 13:38:21.082888] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.174 [2024-05-15 13:38:21.082895] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
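Everything up to this point is the launch-and-wait idiom: the nvmf_tgt binary is started inside the freshly built namespace with the logged core mask, and waitforlisten blocks until the app answers on its default RPC socket before any configuration is sent. A rough equivalent of that sequence, with the binary path and mask taken from the trace (the polling loop is a simplified stand-in for waitforlisten, not its actual implementation):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # block until the target answers on /var/tmp/spdk.sock and has finished subsystem init
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init 2>/dev/null; do
        sleep 0.5
    done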
00:20:08.174 [2024-05-15 13:38:21.083025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.174 [2024-05-15 13:38:21.084160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:08.174 [2024-05-15 13:38:21.084199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:08.174 [2024-05-15 13:38:21.084209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:09.109 [2024-05-15 13:38:21.919896] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.109 13:38:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:09.109 Malloc0 00:20:09.109 [2024-05-15 13:38:21.996364] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:09.109 [2024-05-15 13:38:21.996878] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=89440 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 89440 /var/tmp/bdevperf.sock 
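The batched rpc_cmd above is not echoed line by line, but its effects are visible in the trace: a Malloc0 bdev built from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512, the TCP transport, and a subsystem listening on 10.0.0.2:4420 that the bdevperf config below attaches to as nqn.2016-06.io.spdk:cnode0. A plausible hand-typed equivalent using standard SPDK RPCs (the serial number and exact option set in host_management.sh's heredoc are assumptions here, not shown in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192          # as traced above
    $rpc bdev_malloc_create 64 512 -b Malloc0             # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420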
00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 89440 ']' 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:09.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:09.109 { 00:20:09.109 "params": { 00:20:09.109 "name": "Nvme$subsystem", 00:20:09.109 "trtype": "$TEST_TRANSPORT", 00:20:09.109 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:09.109 "adrfam": "ipv4", 00:20:09.109 "trsvcid": "$NVMF_PORT", 00:20:09.109 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:09.109 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:09.109 "hdgst": ${hdgst:-false}, 00:20:09.109 "ddgst": ${ddgst:-false} 00:20:09.109 }, 00:20:09.109 "method": "bdev_nvme_attach_controller" 00:20:09.109 } 00:20:09.109 EOF 00:20:09.109 )") 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:20:09.109 13:38:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:09.109 "params": { 00:20:09.109 "name": "Nvme0", 00:20:09.109 "trtype": "tcp", 00:20:09.109 "traddr": "10.0.0.2", 00:20:09.109 "adrfam": "ipv4", 00:20:09.109 "trsvcid": "4420", 00:20:09.109 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:09.109 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:09.109 "hdgst": false, 00:20:09.109 "ddgst": false 00:20:09.109 }, 00:20:09.109 "method": "bdev_nvme_attach_controller" 00:20:09.109 }' 00:20:09.109 [2024-05-15 13:38:22.102175] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:09.110 [2024-05-15 13:38:22.102912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89440 ] 00:20:09.368 [2024-05-15 13:38:22.223138] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
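bdevperf gets its target not from live RPCs but from the JSON config just expanded by gen_nvmf_target_json: a single bdev_nvme_attach_controller entry pointing at the listener created above (the real helper may append further housekeeping methods). Written out as a standalone file for illustration (the outer "subsystems"/"bdev" envelope is the usual SPDK JSON-config wrapper and the /tmp path is hypothetical; the test pipes the config through /dev/fd/63 instead):

    cat > /tmp/bdevperf_nvme.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # 64 outstanding 64 KiB verify I/Os for 10 seconds; -r gives the RPC socket the
    # waitforio loop below polls with bdev_get_iostat
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10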
00:20:09.368 [2024-05-15 13:38:22.240785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.368 [2024-05-15 13:38:22.340873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.629 Running I/O for 10 seconds... 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1027 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1027 -ge 100 ']' 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.207 13:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:10.479 [2024-05-15 13:38:23.306934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.479 [2024-05-15 13:38:23.306986] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.479 [2024-05-15 13:38:23.306998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307026] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307081] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307090] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307120] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307140] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307150] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307169] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307178] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307187] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307196] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307216] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the 
state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307234] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307243] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307251] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307260] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307268] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307277] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307286] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307311] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307320] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307328] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307337] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307355] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307363] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307377] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307385] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307393] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307401] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307410] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307418] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307426] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307434] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307452] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307461] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307487] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307495] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307511] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307519] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307528] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307536] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307561] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307575] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307583] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307594] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307629] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcef0 is same with the state(5) to be set 00:20:10.480 [2024-05-15 13:38:23.307800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.480 [2024-05-15 13:38:23.307852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.480 [2024-05-15 13:38:23.307877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:10.480 [2024-05-15 13:38:23.307888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.480 [2024-05-15 13:38:23.307900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.480 [2024-05-15 13:38:23.307910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.480 [2024-05-15 13:38:23.307921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.480 [2024-05-15 13:38:23.307931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.480 [2024-05-15 13:38:23.307942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.480 [2024-05-15 13:38:23.307951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.480 [2024-05-15 13:38:23.307963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.480 [2024-05-15 13:38:23.307972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.307983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.307993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 
13:38:23.308106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308347] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.481 [2024-05-15 13:38:23.308557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.481 [2024-05-15 13:38:23.308567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308832] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.308988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.308999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.309019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.309039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.309059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.309080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.309100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.309120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.309140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.309160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.309195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.309215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.309243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.309280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.309304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309313] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.482 [2024-05-15 13:38:23.309323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.482 [2024-05-15 13:38:23.309347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.483 [2024-05-15 13:38:23.309357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.483 [2024-05-15 13:38:23.309366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.483 [2024-05-15 13:38:23.309375] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1282310 is same with the state(5) to be set 00:20:10.483 [2024-05-15 13:38:23.309440] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1282310 was disconnected and freed. reset controller. 00:20:10.483 [2024-05-15 13:38:23.309522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.483 [2024-05-15 13:38:23.309537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.483 [2024-05-15 13:38:23.309547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.483 [2024-05-15 13:38:23.309555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.483 [2024-05-15 13:38:23.309565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.483 [2024-05-15 13:38:23.309573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.483 [2024-05-15 13:38:23.309582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.483 [2024-05-15 13:38:23.309590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.483 [2024-05-15 13:38:23.309598] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd12a0 is same with the state(5) to be set 00:20:10.483 [2024-05-15 13:38:23.310749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:10.483 13:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.483 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:20:10.483 13:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.483 13:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:10.483 task offset: 8192 on job bdev=Nvme0n1 fails 00:20:10.483 00:20:10.483 Latency(us) 00:20:10.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.483 Job: Nvme0n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:20:10.483 Job: Nvme0n1 ended in about 0.79 seconds with error 00:20:10.483 Verification LBA range: start 0x0 length 0x400 00:20:10.483 Nvme0n1 : 0.79 1376.19 86.01 80.95 0.00 42948.00 6494.02 42181.35 00:20:10.483 =================================================================================================================== 00:20:10.483 Total : 1376.19 86.01 80.95 0.00 42948.00 6494.02 42181.35 00:20:10.483 [2024-05-15 13:38:23.312549] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:10.483 [2024-05-15 13:38:23.312570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd12a0 (9): Bad file descriptor 00:20:10.483 13:38:23 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.483 13:38:23 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:20:10.483 [2024-05-15 13:38:23.319883] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:11.421 13:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 89440 00:20:11.421 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (89440) - No such process 00:20:11.421 13:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:20:11.421 13:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:20:11.421 13:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:11.421 13:38:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:20:11.421 13:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:20:11.421 13:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:20:11.421 13:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.421 13:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.421 { 00:20:11.421 "params": { 00:20:11.421 "name": "Nvme$subsystem", 00:20:11.421 "trtype": "$TEST_TRANSPORT", 00:20:11.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.421 "adrfam": "ipv4", 00:20:11.421 "trsvcid": "$NVMF_PORT", 00:20:11.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.421 "hdgst": ${hdgst:-false}, 00:20:11.421 "ddgst": ${ddgst:-false} 00:20:11.421 }, 00:20:11.421 "method": "bdev_nvme_attach_controller" 00:20:11.421 } 00:20:11.421 EOF 00:20:11.421 )") 00:20:11.421 13:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:20:11.421 13:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:20:11.421 13:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:20:11.421 13:38:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:11.421 "params": { 00:20:11.421 "name": "Nvme0", 00:20:11.421 "trtype": "tcp", 00:20:11.421 "traddr": "10.0.0.2", 00:20:11.421 "adrfam": "ipv4", 00:20:11.422 "trsvcid": "4420", 00:20:11.422 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:11.422 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:11.422 "hdgst": false, 00:20:11.422 "ddgst": false 00:20:11.422 }, 00:20:11.422 "method": "bdev_nvme_attach_controller" 00:20:11.422 }' 00:20:11.422 [2024-05-15 13:38:24.387007] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:11.422 [2024-05-15 13:38:24.387113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89490 ] 00:20:11.422 [2024-05-15 13:38:24.510594] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:11.680 [2024-05-15 13:38:24.529625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.680 [2024-05-15 13:38:24.634186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.938 Running I/O for 1 seconds... 00:20:12.872 00:20:12.872 Latency(us) 00:20:12.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:12.872 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:12.872 Verification LBA range: start 0x0 length 0x400 00:20:12.872 Nvme0n1 : 1.01 1455.22 90.95 0.00 0.00 43119.33 5808.87 41943.04 00:20:12.872 =================================================================================================================== 00:20:12.872 Total : 1455.22 90.95 0.00 0.00 43119.33 5808.87 41943.04 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:13.131 rmmod nvme_tcp 00:20:13.131 rmmod nvme_fabrics 00:20:13.131 rmmod nvme_keyring 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@125 -- # return 0 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 89362 ']' 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 89362 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 89362 ']' 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 89362 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89362 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89362' 00:20:13.131 killing process with pid 89362 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 89362 00:20:13.131 [2024-05-15 13:38:26.162924] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:13.131 13:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 89362 00:20:13.389 [2024-05-15 13:38:26.384726] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:20:13.389 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:13.389 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:13.389 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:13.389 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:13.389 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:13.389 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.389 13:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.389 13:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.389 13:38:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:13.389 13:38:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:20:13.389 ************************************ 00:20:13.389 END TEST nvmf_host_management 00:20:13.389 ************************************ 00:20:13.389 00:20:13.389 real 0m6.153s 00:20:13.389 user 0m24.102s 00:20:13.389 sys 0m1.548s 00:20:13.389 13:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:13.389 13:38:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:20:13.648 13:38:26 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:13.648 13:38:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:13.648 13:38:26 nvmf_tcp -- common/autotest_common.sh@1103 -- 
# xtrace_disable 00:20:13.648 13:38:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:13.648 ************************************ 00:20:13.648 START TEST nvmf_lvol 00:20:13.648 ************************************ 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:20:13.648 * Looking for test storage... 00:20:13.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.648 13:38:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 
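The nvmftestinit call here (via nvmf_veth_init) builds the veth/namespace topology that the traced ip and iptables commands in the following lines create: the initiator keeps 10.0.0.1/24 on nvmf_init_if in the default namespace, the target namespace nvmf_tgt_ns_spdk gets 10.0.0.2/24 and 10.0.0.3/24, and everything is joined through the nvmf_br bridge. Condensed for readability below; the authoritative sequence is the traced output that follows, and the link-up steps are omitted here.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side, 10.0.0.2/24
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, 10.0.0.3/24
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT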
00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:13.649 Cannot find device "nvmf_tgt_br" 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:13.649 Cannot find device "nvmf_tgt_br2" 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:13.649 Cannot find device "nvmf_tgt_br" 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:13.649 Cannot find device "nvmf_tgt_br2" 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:20:13.649 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:13.907 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:20:13.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:13.908 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:13.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:13.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:20:13.908 00:20:13.908 --- 10.0.0.2 ping statistics --- 00:20:13.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.908 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:13.908 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:13.908 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:20:13.908 00:20:13.908 --- 10.0.0.3 ping statistics --- 00:20:13.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.908 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:13.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:13.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:20:13.908 00:20:13.908 --- 10.0.0.1 ping statistics --- 00:20:13.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.908 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=89697 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 89697 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 89697 ']' 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:13.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:13.908 13:38:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:14.166 [2024-05-15 13:38:27.024355] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:20:14.166 [2024-05-15 13:38:27.024421] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.166 [2024-05-15 13:38:27.143848] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:14.166 [2024-05-15 13:38:27.162739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:14.166 [2024-05-15 13:38:27.261503] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.166 [2024-05-15 13:38:27.261572] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.166 [2024-05-15 13:38:27.261584] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.166 [2024-05-15 13:38:27.261593] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.166 [2024-05-15 13:38:27.261614] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.166 [2024-05-15 13:38:27.261781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.424 [2024-05-15 13:38:27.261979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.424 [2024-05-15 13:38:27.261986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.989 13:38:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:14.989 13:38:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:20:14.989 13:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:14.989 13:38:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:14.989 13:38:28 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:14.989 13:38:28 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.989 13:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:15.247 [2024-05-15 13:38:28.337134] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.505 13:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:15.763 13:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:20:15.763 13:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:16.020 13:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:20:16.020 13:38:28 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:20:16.335 13:38:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:20:16.592 13:38:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c6f165ad-4b03-4963-a694-181680a8cdc1 00:20:16.592 13:38:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c6f165ad-4b03-4963-a694-181680a8cdc1 lvol 20 00:20:16.850 13:38:29 nvmf_tcp.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # lvol=686b98ba-82cf-4a20-9085-2a420cddc016 00:20:16.850 13:38:29 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:17.108 13:38:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 686b98ba-82cf-4a20-9085-2a420cddc016 00:20:17.383 13:38:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:17.640 [2024-05-15 13:38:30.595289] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:17.640 [2024-05-15 13:38:30.595800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.640 13:38:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:17.898 13:38:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:20:17.898 13:38:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=89849 00:20:17.898 13:38:30 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:20:18.831 13:38:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 686b98ba-82cf-4a20-9085-2a420cddc016 MY_SNAPSHOT 00:20:19.396 13:38:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f9f9d08c-9630-4ef6-ba32-6db36357628b 00:20:19.396 13:38:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 686b98ba-82cf-4a20-9085-2a420cddc016 30 00:20:19.653 13:38:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f9f9d08c-9630-4ef6-ba32-6db36357628b MY_CLONE 00:20:19.911 13:38:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8160862d-78cb-4279-83eb-f91e163a3ab5 00:20:19.911 13:38:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 8160862d-78cb-4279-83eb-f91e163a3ab5 00:20:20.844 13:38:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 89849 00:20:29.000 Initializing NVMe Controllers 00:20:29.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:20:29.000 Controller IO queue size 128, less than required. 00:20:29.000 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:29.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:20:29.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:20:29.000 Initialization complete. Launching workers. 
00:20:29.000 ======================================================== 00:20:29.000 Latency(us) 00:20:29.000 Device Information : IOPS MiB/s Average min max 00:20:29.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 6918.50 27.03 18523.62 3413.68 160875.35 00:20:29.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 6529.50 25.51 19613.62 3304.15 64993.09 00:20:29.000 ======================================================== 00:20:29.000 Total : 13448.00 52.53 19052.85 3304.15 160875.35 00:20:29.000 00:20:29.000 13:38:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:29.000 13:38:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 686b98ba-82cf-4a20-9085-2a420cddc016 00:20:29.000 13:38:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c6f165ad-4b03-4963-a694-181680a8cdc1 00:20:29.000 13:38:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:20:29.000 13:38:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:20:29.000 13:38:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:20:29.000 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:29.000 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:29.307 rmmod nvme_tcp 00:20:29.307 rmmod nvme_fabrics 00:20:29.307 rmmod nvme_keyring 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 89697 ']' 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 89697 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 89697 ']' 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 89697 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89697 00:20:29.307 killing process with pid 89697 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89697' 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 89697 00:20:29.307 [2024-05-15 13:38:42.204362] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:29.307 13:38:42 nvmf_tcp.nvmf_lvol -- 
common/autotest_common.sh@970 -- # wait 89697 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:29.565 00:20:29.565 real 0m16.002s 00:20:29.565 user 1m6.873s 00:20:29.565 sys 0m3.977s 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:20:29.565 ************************************ 00:20:29.565 END TEST nvmf_lvol 00:20:29.565 ************************************ 00:20:29.565 13:38:42 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:20:29.565 13:38:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:29.565 13:38:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:29.565 13:38:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:29.565 ************************************ 00:20:29.565 START TEST nvmf_lvs_grow 00:20:29.565 ************************************ 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:20:29.565 * Looking for test storage... 
00:20:29.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.565 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:29.824 Cannot find device "nvmf_tgt_br" 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:29.824 Cannot find device "nvmf_tgt_br2" 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:29.824 Cannot find device "nvmf_tgt_br" 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:29.824 Cannot find device "nvmf_tgt_br2" 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:29.824 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:29.824 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:29.824 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:30.083 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:30.083 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:30.083 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:30.083 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:30.083 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:30.083 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:30.083 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:30.083 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:30.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:30.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:20:30.083 00:20:30.083 --- 10.0.0.2 ping statistics --- 00:20:30.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.083 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:30.083 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:30.083 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:30.083 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:20:30.083 00:20:30.083 --- 10.0.0.3 ping statistics --- 00:20:30.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.083 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:20:30.083 13:38:42 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:30.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:30.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:20:30.083 00:20:30.083 --- 10.0.0.1 ping statistics --- 00:20:30.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.083 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=90211 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 90211 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 90211 ']' 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
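The commands traced above rebuild the veth test topology for the lvs_grow suite and launch the SPDK nvmf target inside the nvmf_tgt_ns_spdk namespace. The following is a minimal standalone sketch condensed from the commands visible in this trace, not the test script itself (run as root; the second target interface nvmf_tgt_if2/10.0.0.3 is omitted for brevity, and the nvmf_tgt path and 0x1 core mask are simply the values shown above):

  #!/usr/bin/env bash
  # Sketch only: condensed from the nvmf_veth_init / nvmfappstart trace in this log.
  set -e
  NS=nvmf_tgt_ns_spdk

  # Namespace plus veth pairs; the target-side end moves into the namespace
  ip netns add "$NS"
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns "$NS"

  # 10.0.0.1 stays on the host (initiator side), 10.0.0.2 lives in the namespace (target side)
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set lo up

  # Bridge the host-side veth ends together and open TCP port 4420 for NVMe/TCP
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Sanity check both directions, then start the nvmf target inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1
  ip netns exec "$NS" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

Once /var/tmp/spdk.sock is up, the test creates the TCP transport with rpc.py nvmf_create_transport -t tcp -o -u 8192, as shown further down in this trace.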
00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:30.083 13:38:43 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:30.083 [2024-05-15 13:38:43.090320] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:30.083 [2024-05-15 13:38:43.090710] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.342 [2024-05-15 13:38:43.216084] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:30.342 [2024-05-15 13:38:43.235738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.342 [2024-05-15 13:38:43.332056] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:30.342 [2024-05-15 13:38:43.332130] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.342 [2024-05-15 13:38:43.332158] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:30.342 [2024-05-15 13:38:43.332167] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:30.342 [2024-05-15 13:38:43.332174] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.342 [2024-05-15 13:38:43.332217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.274 13:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:31.274 13:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:20:31.274 13:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:31.274 13:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:31.274 13:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:31.274 13:38:44 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.274 13:38:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:31.531 [2024-05-15 13:38:44.408103] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.531 13:38:44 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:20:31.531 13:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:20:31.531 13:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:31.531 13:38:44 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:31.531 ************************************ 00:20:31.531 START TEST lvs_grow_clean 00:20:31.531 ************************************ 00:20:31.531 13:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:20:31.531 13:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:20:31.531 13:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:20:31.531 13:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local 
bdevperf_pid run_test_pid 00:20:31.531 13:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:20:31.531 13:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:20:31.531 13:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:20:31.531 13:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:31.531 13:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:31.531 13:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:31.790 13:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:20:31.790 13:38:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:20:32.052 13:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d592506d-71a8-4851-9fc8-fa7378d47288 00:20:32.052 13:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d592506d-71a8-4851-9fc8-fa7378d47288 00:20:32.052 13:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:20:32.322 13:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:20:32.322 13:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:20:32.322 13:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d592506d-71a8-4851-9fc8-fa7378d47288 lvol 150 00:20:32.580 13:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=211a12f4-b679-4c62-9b22-a8f1f2376bf7 00:20:32.580 13:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:32.580 13:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:20:32.838 [2024-05-15 13:38:45.740496] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:20:32.838 [2024-05-15 13:38:45.740575] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:20:32.838 true 00:20:32.838 13:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:20:32.838 13:38:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d592506d-71a8-4851-9fc8-fa7378d47288 00:20:33.096 13:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:20:33.096 13:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:33.354 13:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 211a12f4-b679-4c62-9b22-a8f1f2376bf7 00:20:33.613 13:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:33.871 [2024-05-15 13:38:46.776889] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:33.871 [2024-05-15 13:38:46.777179] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.871 13:38:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:34.129 13:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=90378 00:20:34.129 13:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:20:34.129 13:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:34.129 13:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 90378 /var/tmp/bdevperf.sock 00:20:34.129 13:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 90378 ']' 00:20:34.129 13:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.129 13:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:34.130 13:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.130 13:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:34.130 13:38:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:20:34.130 [2024-05-15 13:38:47.143741] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:34.130 [2024-05-15 13:38:47.143842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90378 ] 00:20:34.387 [2024-05-15 13:38:47.262202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:20:34.387 [2024-05-15 13:38:47.280081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.387 [2024-05-15 13:38:47.382745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.347 13:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:35.347 13:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:20:35.347 13:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:20:35.605 Nvme0n1 00:20:35.605 13:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:20:35.863 [ 00:20:35.863 { 00:20:35.863 "aliases": [ 00:20:35.863 "211a12f4-b679-4c62-9b22-a8f1f2376bf7" 00:20:35.863 ], 00:20:35.863 "assigned_rate_limits": { 00:20:35.863 "r_mbytes_per_sec": 0, 00:20:35.863 "rw_ios_per_sec": 0, 00:20:35.863 "rw_mbytes_per_sec": 0, 00:20:35.863 "w_mbytes_per_sec": 0 00:20:35.863 }, 00:20:35.863 "block_size": 4096, 00:20:35.863 "claimed": false, 00:20:35.863 "driver_specific": { 00:20:35.863 "mp_policy": "active_passive", 00:20:35.863 "nvme": [ 00:20:35.863 { 00:20:35.863 "ctrlr_data": { 00:20:35.863 "ana_reporting": false, 00:20:35.863 "cntlid": 1, 00:20:35.863 "firmware_revision": "24.05", 00:20:35.863 "model_number": "SPDK bdev Controller", 00:20:35.863 "multi_ctrlr": true, 00:20:35.863 "oacs": { 00:20:35.863 "firmware": 0, 00:20:35.863 "format": 0, 00:20:35.863 "ns_manage": 0, 00:20:35.863 "security": 0 00:20:35.863 }, 00:20:35.863 "serial_number": "SPDK0", 00:20:35.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:35.863 "vendor_id": "0x8086" 00:20:35.863 }, 00:20:35.863 "ns_data": { 00:20:35.863 "can_share": true, 00:20:35.863 "id": 1 00:20:35.863 }, 00:20:35.863 "trid": { 00:20:35.863 "adrfam": "IPv4", 00:20:35.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:35.863 "traddr": "10.0.0.2", 00:20:35.863 "trsvcid": "4420", 00:20:35.863 "trtype": "TCP" 00:20:35.863 }, 00:20:35.863 "vs": { 00:20:35.863 "nvme_version": "1.3" 00:20:35.863 } 00:20:35.863 } 00:20:35.863 ] 00:20:35.863 }, 00:20:35.863 "memory_domains": [ 00:20:35.863 { 00:20:35.863 "dma_device_id": "system", 00:20:35.863 "dma_device_type": 1 00:20:35.863 } 00:20:35.863 ], 00:20:35.863 "name": "Nvme0n1", 00:20:35.863 "num_blocks": 38912, 00:20:35.863 "product_name": "NVMe disk", 00:20:35.863 "supported_io_types": { 00:20:35.863 "abort": true, 00:20:35.863 "compare": true, 00:20:35.863 "compare_and_write": true, 00:20:35.863 "flush": true, 00:20:35.863 "nvme_admin": true, 00:20:35.863 "nvme_io": true, 00:20:35.863 "read": true, 00:20:35.863 "reset": true, 00:20:35.863 "unmap": true, 00:20:35.863 "write": true, 00:20:35.863 "write_zeroes": true 00:20:35.863 }, 00:20:35.863 "uuid": "211a12f4-b679-4c62-9b22-a8f1f2376bf7", 00:20:35.863 "zoned": false 00:20:35.863 } 00:20:35.863 ] 00:20:35.863 13:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:35.863 13:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=90426 00:20:35.863 13:38:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 
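The I/O that follows exercises the actual "grow" path of this test: an lvstore built on a 200 MiB AIO file is exported over NVMe/TCP, and while bdevperf drives random writes the backing file is doubled and the lvstore is grown in place (total_data_clusters goes from 49 to 99 further down). Below is a condensed sketch of that RPC flow using only commands and sizes visible in this trace; $rpc, $aio, $lvs and $lvol are just shorthand for the paths and UUIDs shown above, not part of the test script.

  # Sketch only: condensed from the lvs_grow_clean trace in this log.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

  # Back an lvstore with a 200 MiB AIO file (4 MiB clusters -> 49 data clusters)
  rm -f "$aio" && truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

  # Double the backing file and rescan the AIO bdev; the lvstore still reports 49 clusters
  truncate -s 400M "$aio"
  $rpc bdev_aio_rescan aio_bdev                  # block count 51200 -> 102400

  # Export the lvol over NVMe/TCP so bdevperf can write to it through the target
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # ... bdevperf attaches Nvme0 over 10.0.0.2:4420 and runs 10 s of randwrite here ...

  # Grow the lvstore in place while I/O is in flight: 49 -> 99 data clusters
  $rpc bdev_lvol_grow_lvstore -u "$lvs"
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

The corresponding teardown (bdev_lvol_delete, bdev_lvol_delete_lvstore, bdev_aio_delete) appears further down in the trace after the 10-second run completes.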
00:20:35.863 Running I/O for 10 seconds... 00:20:37.236 Latency(us) 00:20:37.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.236 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:37.236 Nvme0n1 : 1.00 8160.00 31.88 0.00 0.00 0.00 0.00 0.00 00:20:37.236 =================================================================================================================== 00:20:37.236 Total : 8160.00 31.88 0.00 0.00 0.00 0.00 0.00 00:20:37.236 00:20:37.801 13:38:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d592506d-71a8-4851-9fc8-fa7378d47288 00:20:38.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:38.058 Nvme0n1 : 2.00 8107.50 31.67 0.00 0.00 0.00 0.00 0.00 00:20:38.058 =================================================================================================================== 00:20:38.058 Total : 8107.50 31.67 0.00 0.00 0.00 0.00 0.00 00:20:38.058 00:20:38.058 true 00:20:38.058 13:38:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d592506d-71a8-4851-9fc8-fa7378d47288 00:20:38.058 13:38:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:20:38.316 13:38:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:20:38.316 13:38:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:20:38.316 13:38:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 90426 00:20:38.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:38.972 Nvme0n1 : 3.00 8135.00 31.78 0.00 0.00 0.00 0.00 0.00 00:20:38.972 =================================================================================================================== 00:20:38.972 Total : 8135.00 31.78 0.00 0.00 0.00 0.00 0.00 00:20:38.972 00:20:39.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:39.907 Nvme0n1 : 4.00 8135.25 31.78 0.00 0.00 0.00 0.00 0.00 00:20:39.907 =================================================================================================================== 00:20:39.907 Total : 8135.25 31.78 0.00 0.00 0.00 0.00 0.00 00:20:39.907 00:20:40.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:40.842 Nvme0n1 : 5.00 8140.40 31.80 0.00 0.00 0.00 0.00 0.00 00:20:40.842 =================================================================================================================== 00:20:40.842 Total : 8140.40 31.80 0.00 0.00 0.00 0.00 0.00 00:20:40.842 00:20:42.216 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:42.216 Nvme0n1 : 6.00 8118.50 31.71 0.00 0.00 0.00 0.00 0.00 00:20:42.216 =================================================================================================================== 00:20:42.216 Total : 8118.50 31.71 0.00 0.00 0.00 0.00 0.00 00:20:42.216 00:20:43.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:43.150 Nvme0n1 : 7.00 8103.29 31.65 0.00 0.00 0.00 0.00 0.00 00:20:43.150 =================================================================================================================== 00:20:43.150 Total : 8103.29 31.65 0.00 0.00 0.00 0.00 0.00 00:20:43.150 
00:20:44.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:44.084 Nvme0n1 : 8.00 8085.88 31.59 0.00 0.00 0.00 0.00 0.00 00:20:44.084 =================================================================================================================== 00:20:44.084 Total : 8085.88 31.59 0.00 0.00 0.00 0.00 0.00 00:20:44.084 00:20:45.018 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:45.018 Nvme0n1 : 9.00 8082.33 31.57 0.00 0.00 0.00 0.00 0.00 00:20:45.018 =================================================================================================================== 00:20:45.018 Total : 8082.33 31.57 0.00 0.00 0.00 0.00 0.00 00:20:45.018 00:20:45.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:45.952 Nvme0n1 : 10.00 8058.20 31.48 0.00 0.00 0.00 0.00 0.00 00:20:45.952 =================================================================================================================== 00:20:45.952 Total : 8058.20 31.48 0.00 0.00 0.00 0.00 0.00 00:20:45.952 00:20:45.952 00:20:45.952 Latency(us) 00:20:45.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:45.952 Nvme0n1 : 10.00 8068.51 31.52 0.00 0.00 15858.61 7685.59 32172.22 00:20:45.952 =================================================================================================================== 00:20:45.952 Total : 8068.51 31.52 0.00 0.00 15858.61 7685.59 32172.22 00:20:45.952 0 00:20:45.952 13:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 90378 00:20:45.952 13:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 90378 ']' 00:20:45.952 13:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 90378 00:20:45.952 13:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:20:45.952 13:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:45.952 13:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90378 00:20:45.952 13:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:45.952 13:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:45.952 killing process with pid 90378 00:20:45.952 13:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90378' 00:20:45.952 Received shutdown signal, test time was about 10.000000 seconds 00:20:45.952 00:20:45.952 Latency(us) 00:20:45.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.952 =================================================================================================================== 00:20:45.952 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.952 13:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 90378 00:20:45.952 13:38:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 90378 00:20:46.210 13:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:46.469 13:38:59 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:46.727 13:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d592506d-71a8-4851-9fc8-fa7378d47288 00:20:46.727 13:38:59 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:20:46.985 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:20:46.985 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:20:46.985 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:47.243 [2024-05-15 13:39:00.306312] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d592506d-71a8-4851-9fc8-fa7378d47288 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d592506d-71a8-4851-9fc8-fa7378d47288 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d592506d-71a8-4851-9fc8-fa7378d47288 00:20:47.502 2024/05/15 13:39:00 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:d592506d-71a8-4851-9fc8-fa7378d47288], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:20:47.502 request: 00:20:47.502 { 00:20:47.502 "method": "bdev_lvol_get_lvstores", 00:20:47.502 "params": { 00:20:47.502 "uuid": "d592506d-71a8-4851-9fc8-fa7378d47288" 00:20:47.502 } 00:20:47.502 } 00:20:47.502 Got JSON-RPC error response 00:20:47.502 GoRPCClient: error on JSON-RPC call 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:20:47.502 13:39:00 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:47.502 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:48.096 aio_bdev 00:20:48.096 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 211a12f4-b679-4c62-9b22-a8f1f2376bf7 00:20:48.096 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=211a12f4-b679-4c62-9b22-a8f1f2376bf7 00:20:48.096 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:48.096 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:20:48.096 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:48.096 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:48.096 13:39:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:48.355 13:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 211a12f4-b679-4c62-9b22-a8f1f2376bf7 -t 2000 00:20:48.614 [ 00:20:48.614 { 00:20:48.614 "aliases": [ 00:20:48.614 "lvs/lvol" 00:20:48.614 ], 00:20:48.614 "assigned_rate_limits": { 00:20:48.614 "r_mbytes_per_sec": 0, 00:20:48.614 "rw_ios_per_sec": 0, 00:20:48.614 "rw_mbytes_per_sec": 0, 00:20:48.614 "w_mbytes_per_sec": 0 00:20:48.614 }, 00:20:48.614 "block_size": 4096, 00:20:48.614 "claimed": false, 00:20:48.614 "driver_specific": { 00:20:48.614 "lvol": { 00:20:48.614 "base_bdev": "aio_bdev", 00:20:48.614 "clone": false, 00:20:48.614 "esnap_clone": false, 00:20:48.614 "lvol_store_uuid": "d592506d-71a8-4851-9fc8-fa7378d47288", 00:20:48.614 "num_allocated_clusters": 38, 00:20:48.614 "snapshot": false, 00:20:48.614 "thin_provision": false 00:20:48.614 } 00:20:48.614 }, 00:20:48.614 "name": "211a12f4-b679-4c62-9b22-a8f1f2376bf7", 00:20:48.614 "num_blocks": 38912, 00:20:48.614 "product_name": "Logical Volume", 00:20:48.614 "supported_io_types": { 00:20:48.614 "abort": false, 00:20:48.614 "compare": false, 00:20:48.614 "compare_and_write": false, 00:20:48.614 "flush": false, 00:20:48.614 "nvme_admin": false, 00:20:48.614 "nvme_io": false, 00:20:48.614 "read": true, 00:20:48.614 "reset": true, 00:20:48.614 "unmap": true, 00:20:48.614 "write": true, 00:20:48.614 "write_zeroes": true 00:20:48.614 }, 00:20:48.614 "uuid": "211a12f4-b679-4c62-9b22-a8f1f2376bf7", 00:20:48.614 "zoned": false 00:20:48.614 } 00:20:48.614 ] 00:20:48.614 13:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:20:48.614 13:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d592506d-71a8-4851-9fc8-fa7378d47288 00:20:48.614 13:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:20:48.872 13:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:20:48.872 13:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:20:48.872 13:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d592506d-71a8-4851-9fc8-fa7378d47288 00:20:49.130 13:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:20:49.130 13:39:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 211a12f4-b679-4c62-9b22-a8f1f2376bf7 00:20:49.389 13:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d592506d-71a8-4851-9fc8-fa7378d47288 00:20:49.647 13:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:20:49.906 13:39:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:50.163 00:20:50.163 real 0m18.805s 00:20:50.163 user 0m18.143s 00:20:50.163 sys 0m2.272s 00:20:50.163 ************************************ 00:20:50.163 END TEST lvs_grow_clean 00:20:50.163 ************************************ 00:20:50.163 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:50.163 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:20:50.421 13:39:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:20:50.421 13:39:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:50.421 13:39:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:50.421 13:39:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:20:50.421 ************************************ 00:20:50.421 START TEST lvs_grow_dirty 00:20:50.421 ************************************ 00:20:50.421 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:20:50.421 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:20:50.421 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:20:50.421 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:20:50.421 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:20:50.421 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:20:50.421 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:20:50.421 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:50.421 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:50.422 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:20:50.679 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:20:50.679 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:20:50.938 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=36831d11-c107-4765-921a-05b978cde583 00:20:50.938 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:20:50.938 13:39:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36831d11-c107-4765-921a-05b978cde583 00:20:51.197 13:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:20:51.197 13:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:20:51.197 13:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 36831d11-c107-4765-921a-05b978cde583 lvol 150 00:20:51.455 13:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=81a3d868-ed45-4568-a453-4cf5d356215b 00:20:51.456 13:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:20:51.456 13:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:20:51.456 [2024-05-15 13:39:04.551452] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:20:51.456 [2024-05-15 13:39:04.551534] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:20:51.714 true 00:20:51.714 13:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36831d11-c107-4765-921a-05b978cde583 00:20:51.714 13:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:20:51.714 13:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:20:51.714 13:39:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:52.280 13:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 81a3d868-ed45-4568-a453-4cf5d356215b 00:20:52.280 13:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:52.538 [2024-05-15 13:39:05.560055] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.538 13:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:52.797 13:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:20:52.797 13:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=90829 00:20:52.797 13:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:52.797 13:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 90829 /var/tmp/bdevperf.sock 00:20:52.797 13:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 90829 ']' 00:20:52.797 13:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:52.797 13:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:52.797 13:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:52.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:52.797 13:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:52.797 13:39:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:20:53.056 [2024-05-15 13:39:05.896300] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:20:53.056 [2024-05-15 13:39:05.896401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90829 ] 00:20:53.056 [2024-05-15 13:39:06.015488] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
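The bdevperf process launched above runs in -z mode, so it sits idle on its RPC socket until a workload is requested. A condensed sketch of that control flow, using only the paths, flags and addresses visible in the surrounding trace (the test itself also polls the socket with waitforlisten before issuing any RPCs):

    BDEVPERF_SOCK=/var/tmp/bdevperf.sock
    SPDK=/home/vagrant/spdk_repo/spdk

    # Start bdevperf idle; -z defers I/O until perform_tests is requested over RPC.
    "$SPDK/build/examples/bdevperf" -r "$BDEVPERF_SOCK" \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!

    # Attach the namespace exported by the target as bdev Nvme0n1.
    "$SPDK/scripts/rpc.py" -s "$BDEVPERF_SOCK" bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

    # Kick off the configured 10-second randwrite run; the test kills bdevperf
    # afterwards rather than letting it exit on its own.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BDEVPERF_SOCK" perform_tests
    kill "$bdevperf_pid"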
00:20:53.056 [2024-05-15 13:39:06.032623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.056 [2024-05-15 13:39:06.131752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.991 13:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:53.991 13:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:20:53.991 13:39:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:20:54.250 Nvme0n1 00:20:54.250 13:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:20:54.509 [ 00:20:54.509 { 00:20:54.509 "aliases": [ 00:20:54.509 "81a3d868-ed45-4568-a453-4cf5d356215b" 00:20:54.509 ], 00:20:54.509 "assigned_rate_limits": { 00:20:54.509 "r_mbytes_per_sec": 0, 00:20:54.509 "rw_ios_per_sec": 0, 00:20:54.509 "rw_mbytes_per_sec": 0, 00:20:54.509 "w_mbytes_per_sec": 0 00:20:54.509 }, 00:20:54.509 "block_size": 4096, 00:20:54.509 "claimed": false, 00:20:54.509 "driver_specific": { 00:20:54.509 "mp_policy": "active_passive", 00:20:54.509 "nvme": [ 00:20:54.509 { 00:20:54.509 "ctrlr_data": { 00:20:54.509 "ana_reporting": false, 00:20:54.509 "cntlid": 1, 00:20:54.509 "firmware_revision": "24.05", 00:20:54.509 "model_number": "SPDK bdev Controller", 00:20:54.509 "multi_ctrlr": true, 00:20:54.509 "oacs": { 00:20:54.509 "firmware": 0, 00:20:54.509 "format": 0, 00:20:54.509 "ns_manage": 0, 00:20:54.509 "security": 0 00:20:54.509 }, 00:20:54.509 "serial_number": "SPDK0", 00:20:54.509 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:54.509 "vendor_id": "0x8086" 00:20:54.509 }, 00:20:54.509 "ns_data": { 00:20:54.509 "can_share": true, 00:20:54.509 "id": 1 00:20:54.509 }, 00:20:54.509 "trid": { 00:20:54.509 "adrfam": "IPv4", 00:20:54.509 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:54.509 "traddr": "10.0.0.2", 00:20:54.509 "trsvcid": "4420", 00:20:54.509 "trtype": "TCP" 00:20:54.509 }, 00:20:54.509 "vs": { 00:20:54.509 "nvme_version": "1.3" 00:20:54.509 } 00:20:54.509 } 00:20:54.509 ] 00:20:54.509 }, 00:20:54.509 "memory_domains": [ 00:20:54.509 { 00:20:54.509 "dma_device_id": "system", 00:20:54.509 "dma_device_type": 1 00:20:54.509 } 00:20:54.509 ], 00:20:54.509 "name": "Nvme0n1", 00:20:54.509 "num_blocks": 38912, 00:20:54.509 "product_name": "NVMe disk", 00:20:54.509 "supported_io_types": { 00:20:54.509 "abort": true, 00:20:54.509 "compare": true, 00:20:54.509 "compare_and_write": true, 00:20:54.509 "flush": true, 00:20:54.509 "nvme_admin": true, 00:20:54.509 "nvme_io": true, 00:20:54.509 "read": true, 00:20:54.509 "reset": true, 00:20:54.509 "unmap": true, 00:20:54.509 "write": true, 00:20:54.509 "write_zeroes": true 00:20:54.509 }, 00:20:54.509 "uuid": "81a3d868-ed45-4568-a453-4cf5d356215b", 00:20:54.509 "zoned": false 00:20:54.509 } 00:20:54.509 ] 00:20:54.509 13:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=90871 00:20:54.509 13:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:54.509 13:39:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 
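The 10-second randwrite run that starts below is the window in which the dirty-grow step happens: the backing AIO file was already truncated from 200M to 400M and rescanned, so growing the lvstore while I/O is in flight should take total_data_clusters from 49 to 99. A minimal sketch of that step, with the UUID and checks taken from this trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    LVS_UUID=36831d11-c107-4765-921a-05b978cde583

    # Grow the lvstore into the space exposed by the earlier truncate + aio rescan.
    "$RPC" bdev_lvol_grow_lvstore -u "$LVS_UUID"

    # With a 4 MiB cluster size, the 400M file should now hold 99 data clusters.
    clusters=$("$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
    (( clusters == 99 )) || echo "unexpected total_data_clusters: $clusters"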
00:20:54.509 Running I/O for 10 seconds... 00:20:55.447 Latency(us) 00:20:55.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:55.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:55.447 Nvme0n1 : 1.00 8074.00 31.54 0.00 0.00 0.00 0.00 0.00 00:20:55.447 =================================================================================================================== 00:20:55.447 Total : 8074.00 31.54 0.00 0.00 0.00 0.00 0.00 00:20:55.447 00:20:56.405 13:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 36831d11-c107-4765-921a-05b978cde583 00:20:56.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:56.667 Nvme0n1 : 2.00 8050.00 31.45 0.00 0.00 0.00 0.00 0.00 00:20:56.667 =================================================================================================================== 00:20:56.667 Total : 8050.00 31.45 0.00 0.00 0.00 0.00 0.00 00:20:56.667 00:20:56.667 true 00:20:56.667 13:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36831d11-c107-4765-921a-05b978cde583 00:20:56.667 13:39:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:20:57.233 13:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:20:57.233 13:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:20:57.233 13:39:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 90871 00:20:57.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:57.491 Nvme0n1 : 3.00 8103.67 31.65 0.00 0.00 0.00 0.00 0.00 00:20:57.491 =================================================================================================================== 00:20:57.491 Total : 8103.67 31.65 0.00 0.00 0.00 0.00 0.00 00:20:57.491 00:20:58.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:58.867 Nvme0n1 : 4.00 8086.00 31.59 0.00 0.00 0.00 0.00 0.00 00:20:58.867 =================================================================================================================== 00:20:58.867 Total : 8086.00 31.59 0.00 0.00 0.00 0.00 0.00 00:20:58.867 00:20:59.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:20:59.434 Nvme0n1 : 5.00 8012.20 31.30 0.00 0.00 0.00 0.00 0.00 00:20:59.434 =================================================================================================================== 00:20:59.434 Total : 8012.20 31.30 0.00 0.00 0.00 0.00 0.00 00:20:59.434 00:21:00.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:00.808 Nvme0n1 : 6.00 8010.83 31.29 0.00 0.00 0.00 0.00 0.00 00:21:00.808 =================================================================================================================== 00:21:00.808 Total : 8010.83 31.29 0.00 0.00 0.00 0.00 0.00 00:21:00.808 00:21:01.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:01.456 Nvme0n1 : 7.00 7642.86 29.85 0.00 0.00 0.00 0.00 0.00 00:21:01.456 =================================================================================================================== 00:21:01.456 Total : 7642.86 29.85 0.00 0.00 0.00 0.00 0.00 00:21:01.456 
00:21:02.831 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:02.831 Nvme0n1 : 8.00 7630.88 29.81 0.00 0.00 0.00 0.00 0.00 00:21:02.831 =================================================================================================================== 00:21:02.831 Total : 7630.88 29.81 0.00 0.00 0.00 0.00 0.00 00:21:02.831 00:21:03.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:03.765 Nvme0n1 : 9.00 7632.67 29.82 0.00 0.00 0.00 0.00 0.00 00:21:03.765 =================================================================================================================== 00:21:03.765 Total : 7632.67 29.82 0.00 0.00 0.00 0.00 0.00 00:21:03.765 00:21:04.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:04.698 Nvme0n1 : 10.00 7598.40 29.68 0.00 0.00 0.00 0.00 0.00 00:21:04.698 =================================================================================================================== 00:21:04.698 Total : 7598.40 29.68 0.00 0.00 0.00 0.00 0.00 00:21:04.698 00:21:04.698 00:21:04.699 Latency(us) 00:21:04.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:04.699 Nvme0n1 : 10.00 7608.64 29.72 0.00 0.00 16815.99 4110.89 322198.81 00:21:04.699 =================================================================================================================== 00:21:04.699 Total : 7608.64 29.72 0.00 0.00 16815.99 4110.89 322198.81 00:21:04.699 0 00:21:04.699 13:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 90829 00:21:04.699 13:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 90829 ']' 00:21:04.699 13:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 90829 00:21:04.699 13:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:21:04.699 13:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:04.699 13:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90829 00:21:04.699 13:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:04.699 13:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:04.699 killing process with pid 90829 00:21:04.699 13:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90829' 00:21:04.699 13:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 90829 00:21:04.699 Received shutdown signal, test time was about 10.000000 seconds 00:21:04.699 00:21:04.699 Latency(us) 00:21:04.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.699 =================================================================================================================== 00:21:04.699 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.699 13:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 90829 00:21:04.957 13:39:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:05.215 13:39:18 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:21:05.473 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:21:05.473 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36831d11-c107-4765-921a-05b978cde583 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 90211 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 90211 00:21:05.731 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 90211 Killed "${NVMF_APP[@]}" "$@" 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=91035 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 91035 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 91035 ']' 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:05.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:05.731 13:39:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:05.989 [2024-05-15 13:39:18.874540] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:05.989 [2024-05-15 13:39:18.874665] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.989 [2024-05-15 13:39:19.000796] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
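At this point the original target was killed with SIGKILL while the lvstore was dirty, and a fresh nvmf_tgt has just been started. Re-creating aio_bdev over the same backing file below makes the blobstore detect the unclean shutdown and recover the lvstore on load (see the bs_recover notices that follow). A condensed sketch of that recovery check, with the file path, UUID and expected counts taken from this trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    AIO_FILE=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    LVS_UUID=36831d11-c107-4765-921a-05b978cde583

    # Re-create the AIO bdev over the surviving file; loading the lvstore from a
    # dirty blobstore triggers recovery.
    "$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096
    "$RPC" bdev_wait_for_examine   # lvol bdevs reappear once examine finishes

    # The 150M lvol occupies 38 of the 99 clusters, leaving 61 free.
    free=$("$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
    (( free == 61 )) || echo "unexpected free_clusters: $free"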
00:21:05.989 [2024-05-15 13:39:19.018691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.246 [2024-05-15 13:39:19.113743] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.246 [2024-05-15 13:39:19.113802] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.246 [2024-05-15 13:39:19.113814] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.246 [2024-05-15 13:39:19.113823] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.246 [2024-05-15 13:39:19.113830] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.247 [2024-05-15 13:39:19.113857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.811 13:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:06.811 13:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:21:06.811 13:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:06.811 13:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.811 13:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:06.811 13:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.811 13:39:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:07.069 [2024-05-15 13:39:20.130926] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:21:07.069 [2024-05-15 13:39:20.131156] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:21:07.069 [2024-05-15 13:39:20.131316] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:21:07.327 13:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:21:07.327 13:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 81a3d868-ed45-4568-a453-4cf5d356215b 00:21:07.327 13:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=81a3d868-ed45-4568-a453-4cf5d356215b 00:21:07.327 13:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:07.327 13:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:21:07.327 13:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:07.327 13:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:07.327 13:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:07.585 13:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 81a3d868-ed45-4568-a453-4cf5d356215b -t 2000 00:21:07.844 [ 00:21:07.844 { 00:21:07.844 "aliases": [ 00:21:07.844 "lvs/lvol" 00:21:07.844 ], 00:21:07.844 "assigned_rate_limits": { 00:21:07.844 "r_mbytes_per_sec": 
0, 00:21:07.844 "rw_ios_per_sec": 0, 00:21:07.844 "rw_mbytes_per_sec": 0, 00:21:07.844 "w_mbytes_per_sec": 0 00:21:07.844 }, 00:21:07.844 "block_size": 4096, 00:21:07.844 "claimed": false, 00:21:07.844 "driver_specific": { 00:21:07.844 "lvol": { 00:21:07.844 "base_bdev": "aio_bdev", 00:21:07.844 "clone": false, 00:21:07.844 "esnap_clone": false, 00:21:07.844 "lvol_store_uuid": "36831d11-c107-4765-921a-05b978cde583", 00:21:07.844 "num_allocated_clusters": 38, 00:21:07.844 "snapshot": false, 00:21:07.844 "thin_provision": false 00:21:07.844 } 00:21:07.844 }, 00:21:07.844 "name": "81a3d868-ed45-4568-a453-4cf5d356215b", 00:21:07.844 "num_blocks": 38912, 00:21:07.844 "product_name": "Logical Volume", 00:21:07.844 "supported_io_types": { 00:21:07.844 "abort": false, 00:21:07.844 "compare": false, 00:21:07.844 "compare_and_write": false, 00:21:07.844 "flush": false, 00:21:07.844 "nvme_admin": false, 00:21:07.844 "nvme_io": false, 00:21:07.844 "read": true, 00:21:07.844 "reset": true, 00:21:07.844 "unmap": true, 00:21:07.844 "write": true, 00:21:07.844 "write_zeroes": true 00:21:07.844 }, 00:21:07.844 "uuid": "81a3d868-ed45-4568-a453-4cf5d356215b", 00:21:07.844 "zoned": false 00:21:07.844 } 00:21:07.844 ] 00:21:07.844 13:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:21:07.844 13:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36831d11-c107-4765-921a-05b978cde583 00:21:07.844 13:39:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:21:08.105 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:21:08.105 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36831d11-c107-4765-921a-05b978cde583 00:21:08.105 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:21:08.402 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:21:08.403 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:08.661 [2024-05-15 13:39:21.600424] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:21:08.661 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36831d11-c107-4765-921a-05b978cde583 00:21:08.661 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:21:08.661 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36831d11-c107-4765-921a-05b978cde583 00:21:08.661 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.661 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:08.661 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.661 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:08.661 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.661 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:08.661 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:08.661 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:08.661 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36831d11-c107-4765-921a-05b978cde583 00:21:08.919 2024/05/15 13:39:21 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:36831d11-c107-4765-921a-05b978cde583], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:21:08.919 request: 00:21:08.919 { 00:21:08.919 "method": "bdev_lvol_get_lvstores", 00:21:08.919 "params": { 00:21:08.919 "uuid": "36831d11-c107-4765-921a-05b978cde583" 00:21:08.919 } 00:21:08.919 } 00:21:08.919 Got JSON-RPC error response 00:21:08.919 GoRPCClient: error on JSON-RPC call 00:21:08.919 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:21:08.919 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:08.919 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:08.919 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:08.919 13:39:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:21:09.177 aio_bdev 00:21:09.177 13:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 81a3d868-ed45-4568-a453-4cf5d356215b 00:21:09.177 13:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=81a3d868-ed45-4568-a453-4cf5d356215b 00:21:09.177 13:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:09.177 13:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:21:09.177 13:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:09.177 13:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:09.177 13:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:09.435 13:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 81a3d868-ed45-4568-a453-4cf5d356215b -t 2000 00:21:09.692 [ 00:21:09.692 { 00:21:09.692 "aliases": [ 00:21:09.692 "lvs/lvol" 00:21:09.692 ], 00:21:09.692 "assigned_rate_limits": { 00:21:09.692 "r_mbytes_per_sec": 0, 00:21:09.692 "rw_ios_per_sec": 0, 00:21:09.692 "rw_mbytes_per_sec": 0, 00:21:09.692 "w_mbytes_per_sec": 0 00:21:09.692 }, 00:21:09.692 "block_size": 4096, 00:21:09.692 "claimed": false, 00:21:09.692 "driver_specific": { 
00:21:09.692 "lvol": { 00:21:09.692 "base_bdev": "aio_bdev", 00:21:09.692 "clone": false, 00:21:09.692 "esnap_clone": false, 00:21:09.692 "lvol_store_uuid": "36831d11-c107-4765-921a-05b978cde583", 00:21:09.692 "num_allocated_clusters": 38, 00:21:09.692 "snapshot": false, 00:21:09.692 "thin_provision": false 00:21:09.692 } 00:21:09.692 }, 00:21:09.692 "name": "81a3d868-ed45-4568-a453-4cf5d356215b", 00:21:09.692 "num_blocks": 38912, 00:21:09.692 "product_name": "Logical Volume", 00:21:09.692 "supported_io_types": { 00:21:09.692 "abort": false, 00:21:09.692 "compare": false, 00:21:09.692 "compare_and_write": false, 00:21:09.692 "flush": false, 00:21:09.692 "nvme_admin": false, 00:21:09.692 "nvme_io": false, 00:21:09.692 "read": true, 00:21:09.692 "reset": true, 00:21:09.692 "unmap": true, 00:21:09.692 "write": true, 00:21:09.692 "write_zeroes": true 00:21:09.692 }, 00:21:09.692 "uuid": "81a3d868-ed45-4568-a453-4cf5d356215b", 00:21:09.692 "zoned": false 00:21:09.692 } 00:21:09.692 ] 00:21:09.692 13:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:21:09.692 13:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36831d11-c107-4765-921a-05b978cde583 00:21:09.692 13:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:21:09.950 13:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:21:09.950 13:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36831d11-c107-4765-921a-05b978cde583 00:21:09.950 13:39:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:21:10.208 13:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:21:10.208 13:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 81a3d868-ed45-4568-a453-4cf5d356215b 00:21:10.467 13:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 36831d11-c107-4765-921a-05b978cde583 00:21:10.724 13:39:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:21:10.983 13:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:21:11.550 00:21:11.550 real 0m21.066s 00:21:11.550 user 0m44.016s 00:21:11.550 sys 0m8.255s 00:21:11.550 13:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:11.550 13:39:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:21:11.550 ************************************ 00:21:11.550 END TEST lvs_grow_dirty 00:21:11.550 ************************************ 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 
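With both sub-tests finished, the wrapper now archives the target's tracepoint buffer before tearing the environment down. Because nvmf_tgt was started with -e 0xFFFF, its trace data lives in /dev/shm/nvmf_trace.0; a condensed sketch of the capture step shown below, which can later be inspected offline (the target suggested 'spdk_trace -s nvmf -i 0' at startup):

    # The target wrote its tracepoints to /dev/shm/nvmf_trace.0 (shm id 0);
    # archive it next to the other build artifacts for offline analysis.
    shm_file=nvmf_trace.0
    tar -C /dev/shm/ -cvzf \
        "/home/vagrant/spdk_repo/spdk/../output/${shm_file}_shm.tar.gz" "$shm_file"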
00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:11.551 nvmf_trace.0 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:11.551 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:11.551 rmmod nvme_tcp 00:21:11.551 rmmod nvme_fabrics 00:21:11.551 rmmod nvme_keyring 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 91035 ']' 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 91035 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 91035 ']' 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 91035 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91035 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:11.827 killing process with pid 91035 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91035' 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 91035 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 91035 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.827 
13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.827 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.087 13:39:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:12.087 00:21:12.087 real 0m42.383s 00:21:12.087 user 1m8.857s 00:21:12.087 sys 0m11.341s 00:21:12.087 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:12.087 13:39:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:21:12.087 ************************************ 00:21:12.087 END TEST nvmf_lvs_grow 00:21:12.087 ************************************ 00:21:12.087 13:39:24 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:12.087 13:39:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:12.087 13:39:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:12.087 13:39:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:12.087 ************************************ 00:21:12.087 START TEST nvmf_bdev_io_wait 00:21:12.087 ************************************ 00:21:12.087 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:21:12.087 * Looking for test storage... 00:21:12.087 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.088 13:39:25 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:12.088 Cannot find device "nvmf_tgt_br" 00:21:12.088 
13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:12.088 Cannot find device "nvmf_tgt_br2" 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:12.088 Cannot find device "nvmf_tgt_br" 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:12.088 Cannot find device "nvmf_tgt_br2" 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:21:12.088 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:12.345 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:12.345 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:12.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:12.345 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:21:12.345 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:12.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:12.345 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:21:12.345 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:12.345 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:12.345 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:12.345 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
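The namespace and veth plumbing for the NVMe/TCP initiator/target split is now in place; the bridge wiring, firewall rules and reachability pings follow below. For reference, a condensed view of the topology being built here, collapsing the ip commands from this trace (link-up, iptables and ping steps omitted):

    # One veth pair for the initiator, two pairs whose peer ends live inside the
    # nvmf_tgt_ns_spdk namespace; the host-side ends are enslaved to nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br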
00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:12.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:21:12.346 00:21:12.346 --- 10.0.0.2 ping statistics --- 00:21:12.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.346 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:12.346 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:12.346 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:21:12.346 00:21:12.346 --- 10.0.0.3 ping statistics --- 00:21:12.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.346 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:12.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:21:12.346 00:21:12.346 --- 10.0.0.1 ping statistics --- 00:21:12.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.346 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:12.346 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:12.604 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:12.604 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:12.604 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:12.604 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:12.604 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=91454 00:21:12.604 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 91454 00:21:12.604 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 91454 ']' 00:21:12.604 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:12.604 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.604 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:12.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.604 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.604 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:12.604 13:39:25 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:12.604 [2024-05-15 13:39:25.516228] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:12.604 [2024-05-15 13:39:25.516338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.604 [2024-05-15 13:39:25.644111] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
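The target itself is started inside that namespace with --wait-for-rpc, so nothing is initialized until the harness has a chance to tune options over the RPC socket. A minimal approximation of the nvmfappstart/waitforlisten steps traced above (the polling loop is a simplified stand-in for the real waitforlisten helper, not its actual implementation):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # path as it appears in the trace

    ip netns exec nvmf_tgt_ns_spdk \
        "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!

    # Wait until the app answers on /var/tmp/spdk.sock before issuing RPCs.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done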
00:21:12.604 [2024-05-15 13:39:25.665834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.861 [2024-05-15 13:39:25.772116] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.861 [2024-05-15 13:39:25.772180] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.861 [2024-05-15 13:39:25.772194] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.861 [2024-05-15 13:39:25.772205] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.861 [2024-05-15 13:39:25.772214] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.861 [2024-05-15 13:39:25.772327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.861 [2024-05-15 13:39:25.772477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.861 [2024-05-15 13:39:25.773042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.861 [2024-05-15 13:39:25.773061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.426 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:13.426 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:21:13.426 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:13.426 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.426 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:13.685 [2024-05-15 13:39:26.635647] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:13.685 Malloc0 
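The RPC sequence just traced is the crux of the bdev_io_wait test: because the target was held at --wait-for-rpc, bdev_set_options can shrink the bdev_io pool before framework_start_init, so bdevperf can later run out of bdev_io structures and exercise the wait path. The same calls issued directly with rpc.py (rpc_cmd in the trace is effectively a wrapper around it):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC bdev_set_options -p 5 -c 1          # tiny bdev_io pool (-p) and per-thread cache (-c)
    $RPC framework_start_init                # finish the deferred subsystem initialization
    $RPC nvmf_create_transport -t tcp -o -u 8192   # TCP transport, same flags the harness uses
    $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512-byte blocks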
00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:13.685 [2024-05-15 13:39:26.690289] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:13.685 [2024-05-15 13:39:26.690534] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=91507 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=91509 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.685 { 00:21:13.685 "params": { 00:21:13.685 "name": "Nvme$subsystem", 00:21:13.685 "trtype": "$TEST_TRANSPORT", 00:21:13.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.685 "adrfam": "ipv4", 00:21:13.685 "trsvcid": "$NVMF_PORT", 00:21:13.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.685 "hdgst": ${hdgst:-false}, 00:21:13.685 "ddgst": ${ddgst:-false} 00:21:13.685 }, 00:21:13.685 "method": "bdev_nvme_attach_controller" 00:21:13.685 } 00:21:13.685 EOF 00:21:13.685 )") 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 
128 -o 4096 -w read -t 1 -s 256 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=91511 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.685 { 00:21:13.685 "params": { 00:21:13.685 "name": "Nvme$subsystem", 00:21:13.685 "trtype": "$TEST_TRANSPORT", 00:21:13.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.685 "adrfam": "ipv4", 00:21:13.685 "trsvcid": "$NVMF_PORT", 00:21:13.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.685 "hdgst": ${hdgst:-false}, 00:21:13.685 "ddgst": ${ddgst:-false} 00:21:13.685 }, 00:21:13.685 "method": "bdev_nvme_attach_controller" 00:21:13.685 } 00:21:13.685 EOF 00:21:13.685 )") 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=91513 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.685 { 00:21:13.685 "params": { 00:21:13.685 "name": "Nvme$subsystem", 00:21:13.685 "trtype": "$TEST_TRANSPORT", 00:21:13.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.685 "adrfam": "ipv4", 00:21:13.685 "trsvcid": "$NVMF_PORT", 00:21:13.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.685 "hdgst": ${hdgst:-false}, 00:21:13.685 "ddgst": ${ddgst:-false} 00:21:13.685 }, 00:21:13.685 "method": "bdev_nvme_attach_controller" 00:21:13.685 } 00:21:13.685 EOF 00:21:13.685 )") 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
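Four bdevperf instances are then launched in parallel, one per workload (write, read, flush, unmap), each pinned to its own core and each reading its bdev configuration from a process substitution (--json /dev/fd/63) filled by gen_nvmf_target_json. Roughly, one of those launches looks like the sketch below (paths taken from the trace, everything else illustrative):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk

    # gen_nvmf_target_json (nvmf/common.sh) emits the attach-controller JSON
    # printed further down in the trace; <(...) is what shows up as /dev/fd/63.
    "$SPDK_DIR/build/examples/bdevperf" -m 0x10 -i 1 \
        --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    # ... -m 0x20/-w read, -m 0x40/-w flush and -m 0x80/-w unmap follow the same pattern ...
    wait "$WRITE_PID"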
00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:13.685 "params": { 00:21:13.685 "name": "Nvme1", 00:21:13.685 "trtype": "tcp", 00:21:13.685 "traddr": "10.0.0.2", 00:21:13.685 "adrfam": "ipv4", 00:21:13.685 "trsvcid": "4420", 00:21:13.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.685 "hdgst": false, 00:21:13.685 "ddgst": false 00:21:13.685 }, 00:21:13.685 "method": "bdev_nvme_attach_controller" 00:21:13.685 }' 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:13.685 { 00:21:13.685 "params": { 00:21:13.685 "name": "Nvme$subsystem", 00:21:13.685 "trtype": "$TEST_TRANSPORT", 00:21:13.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:13.685 "adrfam": "ipv4", 00:21:13.685 "trsvcid": "$NVMF_PORT", 00:21:13.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:13.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:13.685 "hdgst": ${hdgst:-false}, 00:21:13.685 "ddgst": ${ddgst:-false} 00:21:13.685 }, 00:21:13.685 "method": "bdev_nvme_attach_controller" 00:21:13.685 } 00:21:13.685 EOF 00:21:13.685 )") 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:13.685 "params": { 00:21:13.685 "name": "Nvme1", 00:21:13.685 "trtype": "tcp", 00:21:13.685 "traddr": "10.0.0.2", 00:21:13.685 "adrfam": "ipv4", 00:21:13.685 "trsvcid": "4420", 00:21:13.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.685 "hdgst": false, 00:21:13.685 "ddgst": false 00:21:13.685 }, 00:21:13.685 "method": "bdev_nvme_attach_controller" 00:21:13.685 }' 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:21:13.685 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:13.686 "params": { 00:21:13.686 "name": "Nvme1", 00:21:13.686 "trtype": "tcp", 00:21:13.686 "traddr": "10.0.0.2", 00:21:13.686 "adrfam": "ipv4", 00:21:13.686 "trsvcid": "4420", 00:21:13.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.686 "hdgst": false, 00:21:13.686 "ddgst": false 00:21:13.686 }, 00:21:13.686 "method": "bdev_nvme_attach_controller" 00:21:13.686 }' 00:21:13.686 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
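The printf output above is the fully resolved params block. The complete document handed to each bdevperf process is, in outline, a bdev-subsystem config wrapping that block (the outer structure here is reconstructed from how gen_nvmf_target_json is normally used, so treat it as illustrative rather than verbatim):

    cat <<'EOF' | jq .
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

In other words, each bdevperf process attaches an NVMe-oF TCP controller to 10.0.0.2:4420 / cnode1 and runs its workload against the resulting Nvme1n1 bdev.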
00:21:13.686 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:21:13.686 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:13.686 "params": { 00:21:13.686 "name": "Nvme1", 00:21:13.686 "trtype": "tcp", 00:21:13.686 "traddr": "10.0.0.2", 00:21:13.686 "adrfam": "ipv4", 00:21:13.686 "trsvcid": "4420", 00:21:13.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:13.686 "hdgst": false, 00:21:13.686 "ddgst": false 00:21:13.686 }, 00:21:13.686 "method": "bdev_nvme_attach_controller" 00:21:13.686 }' 00:21:13.686 [2024-05-15 13:39:26.752490] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:13.686 [2024-05-15 13:39:26.752585] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:13.686 13:39:26 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 91507 00:21:13.686 [2024-05-15 13:39:26.766693] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:13.686 [2024-05-15 13:39:26.766883] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:21:13.686 [2024-05-15 13:39:26.777378] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:13.686 [2024-05-15 13:39:26.777452] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:21:13.944 [2024-05-15 13:39:26.784574] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:13.944 [2024-05-15 13:39:26.784659] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:21:13.944 [2024-05-15 13:39:26.945240] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:13.944 [2024-05-15 13:39:26.963271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.944 [2024-05-15 13:39:27.016947] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:13.944 [2024-05-15 13:39:27.035294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.944 [2024-05-15 13:39:27.036401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:14.202 [2024-05-15 13:39:27.090864] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:14.202 [2024-05-15 13:39:27.110885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.202 [2024-05-15 13:39:27.113465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:21:14.202 [2024-05-15 13:39:27.164855] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:21:14.202 [2024-05-15 13:39:27.182630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.202 [2024-05-15 13:39:27.186332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:14.202 Running I/O for 1 seconds... 00:21:14.202 [2024-05-15 13:39:27.262062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:14.202 Running I/O for 1 seconds... 00:21:14.461 Running I/O for 1 seconds... 00:21:14.461 Running I/O for 1 seconds... 00:21:15.396 00:21:15.396 Latency(us) 00:21:15.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.396 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:21:15.396 Nvme1n1 : 1.01 9742.11 38.06 0.00 0.00 13081.82 7477.06 21090.68 00:21:15.396 =================================================================================================================== 00:21:15.396 Total : 9742.11 38.06 0.00 0.00 13081.82 7477.06 21090.68 00:21:15.396 00:21:15.396 Latency(us) 00:21:15.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.396 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:21:15.396 Nvme1n1 : 1.01 8656.12 33.81 0.00 0.00 14725.54 7357.91 24427.05 00:21:15.396 =================================================================================================================== 00:21:15.396 Total : 8656.12 33.81 0.00 0.00 14725.54 7357.91 24427.05 00:21:15.396 00:21:15.396 Latency(us) 00:21:15.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.396 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:21:15.396 Nvme1n1 : 1.01 7632.33 29.81 0.00 0.00 16683.53 6166.34 23235.49 00:21:15.396 =================================================================================================================== 00:21:15.396 Total : 7632.33 29.81 0.00 0.00 16683.53 6166.34 23235.49 00:21:15.396 00:21:15.396 Latency(us) 00:21:15.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.397 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:21:15.397 Nvme1n1 : 1.00 195726.43 764.56 0.00 0.00 651.44 277.41 1072.41 00:21:15.397 =================================================================================================================== 00:21:15.397 Total : 195726.43 764.56 0.00 0.00 651.44 277.41 1072.41 00:21:15.397 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 91509 00:21:15.655 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 91511 00:21:15.655 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 91513 00:21:15.655 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:15.655 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.655 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:21:15.655 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.655 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:21:15.655 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:21:15.655 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:15.655 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 
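With all four bdevperf jobs reaped, teardown mirrors the setup. Condensed from the nvmftestfini/nvmfcleanup calls traced here and just below (a sketch of what the helpers do in this run, not the helpers themselves):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the exported namespace
    sync
    modprobe -v -r nvme-tcp          # host-side modules; the rmmod lines in the trace
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"    # killprocess 91454 in this run
    ip netns delete nvmf_tgt_ns_spdk      # what remove_spdk_ns amounts to here
    ip -4 addr flush nvmf_init_if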
00:21:15.655 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:15.655 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:21:15.655 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:15.655 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:15.655 rmmod nvme_tcp 00:21:15.655 rmmod nvme_fabrics 00:21:15.655 rmmod nvme_keyring 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 91454 ']' 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 91454 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 91454 ']' 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 91454 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91454 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91454' 00:21:15.914 killing process with pid 91454 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 91454 00:21:15.914 [2024-05-15 13:39:28.787865] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 91454 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.914 13:39:28 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.173 13:39:29 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:16.173 00:21:16.173 real 0m4.024s 00:21:16.173 user 0m17.644s 00:21:16.173 sys 0m2.095s 00:21:16.173 13:39:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:16.173 ************************************ 00:21:16.173 13:39:29 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 
00:21:16.173 END TEST nvmf_bdev_io_wait 00:21:16.173 ************************************ 00:21:16.173 13:39:29 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:21:16.173 13:39:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:16.173 13:39:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:16.173 13:39:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:16.173 ************************************ 00:21:16.173 START TEST nvmf_queue_depth 00:21:16.173 ************************************ 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:21:16.173 * Looking for test storage... 00:21:16.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.173 13:39:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:16.174 Cannot find device "nvmf_tgt_br" 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:16.174 Cannot find device "nvmf_tgt_br2" 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:16.174 13:39:29 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:16.174 Cannot find device "nvmf_tgt_br" 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:16.174 Cannot find device "nvmf_tgt_br2" 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:21:16.174 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:16.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:16.432 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:16.432 
13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:16.432 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:16.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:16.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:21:16.433 00:21:16.433 --- 10.0.0.2 ping statistics --- 00:21:16.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.433 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:16.433 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:16.433 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:21:16.433 00:21:16.433 --- 10.0.0.3 ping statistics --- 00:21:16.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.433 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:16.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:16.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:16.433 00:21:16.433 --- 10.0.0.1 ping statistics --- 00:21:16.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:16.433 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=91745 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 91745 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 91745 ']' 00:21:16.433 13:39:29 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:16.433 13:39:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:16.691 [2024-05-15 13:39:29.583928] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:16.691 [2024-05-15 13:39:29.584033] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.691 [2024-05-15 13:39:29.707925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:16.691 [2024-05-15 13:39:29.728901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.949 [2024-05-15 13:39:29.832685] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.949 [2024-05-15 13:39:29.832782] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.949 [2024-05-15 13:39:29.832798] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.949 [2024-05-15 13:39:29.832809] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.949 [2024-05-15 13:39:29.832819] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:16.949 [2024-05-15 13:39:29.832848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:17.516 [2024-05-15 13:39:30.553078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:17.516 Malloc0 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.516 13:39:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:17.517 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.517 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:17.517 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.517 13:39:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:17.517 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.517 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:17.517 [2024-05-15 13:39:30.611981] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:17.517 [2024-05-15 13:39:30.612220] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.775 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.775 13:39:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=91795 00:21:17.775 
13:39:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:21:17.775 13:39:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:17.775 13:39:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 91795 /var/tmp/bdevperf.sock 00:21:17.775 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 91795 ']' 00:21:17.775 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.775 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:17.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.775 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.775 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:17.775 13:39:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:17.775 [2024-05-15 13:39:30.698744] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:17.775 [2024-05-15 13:39:30.698839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91795 ] 00:21:17.775 [2024-05-15 13:39:30.822719] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:17.775 [2024-05-15 13:39:30.839231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.034 [2024-05-15 13:39:30.935967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.601 13:39:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:18.601 13:39:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:21:18.601 13:39:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.601 13:39:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.601 13:39:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:18.860 NVMe0n1 00:21:18.860 13:39:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.860 13:39:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:18.860 Running I/O for 10 seconds... 
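The queue_depth test drives bdevperf differently from the previous test: the process starts idle (-z) on its own RPC socket, the NVMe-oF controller is attached through that socket, and only then is the workload kicked off remotely. Condensed from the commands above:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    BPERF_SOCK=/var/tmp/bdevperf.sock

    "$SPDK_DIR/build/examples/bdevperf" -z -r "$BPERF_SOCK" \
        -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!

    # Attach the exported namespace as bdev NVMe0n1 (controller name NVMe0).
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Start the 10-second verify run at queue depth 1024 with 4 KiB I/Os.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

The result table that follows reports what this configuration sustained (here roughly 8.6k IOPS of 4 KiB verify I/O against the malloc-backed namespace).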
00:21:31.074
00:21:31.074 Latency(us)
00:21:31.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:31.074 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:21:31.074 Verification LBA range: start 0x0 length 0x4000
00:21:31.074 NVMe0n1 : 10.09 8590.27 33.56 0.00 0.00 118647.48 29669.93 112483.61
00:21:31.074 ===================================================================================================================
00:21:31.074 Total : 8590.27 33.56 0.00 0.00 118647.48 29669.93 112483.61
00:21:31.074 0
00:21:31.074 13:39:41 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 91795
00:21:31.074 13:39:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 91795 ']'
00:21:31.074 13:39:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 91795
00:21:31.074 13:39:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname
00:21:31.074 13:39:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:21:31.074 13:39:41 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91795
00:21:31.074 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:21:31.074 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:21:31.074 killing process with pid 91795
00:21:31.074 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91795'
00:21:31.074 Received shutdown signal, test time was about 10.000000 seconds
00:21:31.074
00:21:31.074 Latency(us)
00:21:31.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:31.075 ===================================================================================================================
00:21:31.075 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 91795
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 91795
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:31.075 rmmod nvme_tcp
00:21:31.075 rmmod nvme_fabrics
00:21:31.075 rmmod nvme_keyring
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 91745 ']'
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 91745
00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 91745 ']'
00:21:31.075
13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 91745 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91745 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:31.075 killing process with pid 91745 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91745' 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 91745 00:21:31.075 [2024-05-15 13:39:42.348674] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 91745 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:31.075 00:21:31.075 real 0m13.551s 00:21:31.075 user 0m23.471s 00:21:31.075 sys 0m2.046s 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:31.075 ************************************ 00:21:31.075 END TEST nvmf_queue_depth 00:21:31.075 ************************************ 00:21:31.075 13:39:42 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:21:31.075 13:39:42 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:21:31.075 13:39:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:31.075 13:39:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:31.075 13:39:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.075 ************************************ 00:21:31.075 START TEST nvmf_target_multipath 00:21:31.075 ************************************ 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:21:31.075 * Looking for test storage... 
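Editor's note: the queue-depth teardown above shows the harness's generic process-cleanup pattern: guard against an empty PID, probe liveness with kill -0, read the process name with ps, then kill and wait. A minimal sketch of that flow, reconstructed only from the trace (the real killprocess in autotest_common.sh additionally branches on uname and on whether the process is a sudo wrapper, as the '[ Linux = Linux ]' and '[ reactor_0 = sudo ]' checks above indicate; this sketch simply refuses the sudo case):

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                        # the '[ -z ... ]' guard seen in the trace
        kill -0 "$pid" || return 0                       # already exited, nothing to clean up
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for an SPDK app
        [ "$process_name" != sudo ] || return 1          # do not kill a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # valid because the app was launched by this shell
    }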
00:21:31.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:31.075 13:39:42 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:31.075 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:31.076 Cannot find device "nvmf_tgt_br" 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:31.076 Cannot find device "nvmf_tgt_br2" 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:31.076 Cannot find device "nvmf_tgt_br" 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:21:31.076 
13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:31.076 Cannot find device "nvmf_tgt_br2" 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:31.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:31.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:31.076 13:39:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:31.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:21:31.076 00:21:31.076 --- 10.0.0.2 ping statistics --- 00:21:31.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.076 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:31.076 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:31.076 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:21:31.076 00:21:31.076 --- 10.0.0.3 ping statistics --- 00:21:31.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.076 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:31.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:31.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:31.076 00:21:31.076 --- 10.0.0.1 ping statistics --- 00:21:31.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.076 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=92117 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 92117 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # 
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@827 -- # '[' -z 92117 ']' 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:31.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:31.076 13:39:43 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:31.076 [2024-05-15 13:39:43.222470] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:31.076 [2024-05-15 13:39:43.222575] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.076 [2024-05-15 13:39:43.349867] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:31.076 [2024-05-15 13:39:43.366208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:31.076 [2024-05-15 13:39:43.462270] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.076 [2024-05-15 13:39:43.462470] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.076 [2024-05-15 13:39:43.462677] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.076 [2024-05-15 13:39:43.462841] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.076 [2024-05-15 13:39:43.462988] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
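Editor's note: the nvmf_tgt that just announced its startup runs inside the nvmf_tgt_ns_spdk namespace and is reachable only over the veth/bridge fabric that nvmf_veth_init built a few steps earlier. Condensed from the trace above (same interface names and addresses; the failed teardown of leftover interfaces and the ping checks are omitted), the topology amounts to:

    ip netns add nvmf_tgt_ns_spdk

    # one veth pair for the initiator, two for the target's two paths
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = the two target listeners
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # bring everything up and join the host-side ends in one bridge
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # allow NVMe/TCP traffic to the initiator-side interface
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT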
00:21:31.076 [2024-05-15 13:39:43.463268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.076 [2024-05-15 13:39:43.463328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.076 [2024-05-15 13:39:43.463375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.076 [2024-05-15 13:39:43.463382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.334 13:39:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:31.334 13:39:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@860 -- # return 0 00:21:31.334 13:39:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:31.334 13:39:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.334 13:39:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:31.334 13:39:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.334 13:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:31.592 [2024-05-15 13:39:44.561314] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.592 13:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:31.849 Malloc0 00:21:31.849 13:39:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:21:32.108 13:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:32.367 13:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:32.625 [2024-05-15 13:39:45.662693] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:32.625 [2024-05-15 13:39:45.663340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.625 13:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:32.883 [2024-05-15 13:39:45.899136] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:32.884 13:39:45 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:21:33.142 13:39:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:21:33.400 13:39:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial 
SPDKISFASTANDAWESOME 00:21:33.400 13:39:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1194 -- # local i=0 00:21:33.400 13:39:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:21:33.400 13:39:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:21:33.400 13:39:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # sleep 2 00:21:35.295 13:39:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:21:35.295 13:39:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:21:35.295 13:39:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:21:35.295 13:39:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:21:35.295 13:39:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # return 0 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=92260 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:21:35.296 13:39:48 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:21:35.296 [global] 00:21:35.296 thread=1 00:21:35.296 invalidate=1 00:21:35.296 rw=randrw 00:21:35.296 time_based=1 00:21:35.296 runtime=6 00:21:35.296 ioengine=libaio 00:21:35.296 direct=1 00:21:35.296 bs=4096 00:21:35.296 iodepth=128 00:21:35.296 norandommap=0 00:21:35.296 numjobs=1 00:21:35.296 00:21:35.296 verify_dump=1 00:21:35.296 verify_backlog=512 00:21:35.296 verify_state_save=0 00:21:35.296 do_verify=1 00:21:35.296 verify=crc32c-intel 00:21:35.296 [job0] 00:21:35.296 filename=/dev/nvme0n1 00:21:35.553 Could not set queue depth (nvme0n1) 00:21:35.553 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:35.553 fio-3.35 00:21:35.553 Starting 1 thread 00:21:36.485 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:36.742 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:37.000 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:21:37.000 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:21:37.000 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:37.000 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:37.000 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:21:37.000 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:37.000 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:21:37.000 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:21:37.000 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:37.000 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:37.000 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:37.000 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:37.000 13:39:49 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:21:37.933 13:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:37.934 13:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:37.934 13:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:37.934 13:39:50 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:38.190 13:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:38.754 13:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:21:38.754 13:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:21:38.754 13:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:38.754 13:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:38.754 13:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:38.754 13:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:38.754 13:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:21:38.754 13:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:21:38.754 13:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:38.754 13:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:38.754 13:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:38.754 13:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:38.754 13:39:51 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:21:39.685 13:39:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:39.685 13:39:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:39.685 13:39:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:39.685 13:39:52 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 92260 00:21:41.582 00:21:41.582 job0: (groupid=0, jobs=1): err= 0: pid=92281: Wed May 15 13:39:54 2024 00:21:41.582 read: IOPS=11.0k, BW=42.9MiB/s (44.9MB/s)(257MiB/6006msec) 00:21:41.582 slat (usec): min=4, max=7545, avg=51.64, stdev=236.21 00:21:41.582 clat (usec): min=1070, max=15063, avg=7933.18, stdev=1215.99 00:21:41.582 lat (usec): min=1184, max=15968, avg=7984.82, stdev=1225.95 00:21:41.582 clat percentiles (usec): 00:21:41.582 | 1.00th=[ 4883], 5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 7177], 00:21:41.582 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 8094], 00:21:41.582 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[10028], 00:21:41.582 | 99.00th=[11731], 99.50th=[12256], 99.90th=[13566], 99.95th=[14222], 00:21:41.582 | 99.99th=[15008] 00:21:41.582 bw ( KiB/s): min= 7824, max=28856, per=52.84%, avg=23192.73, stdev=6393.42, samples=11 00:21:41.582 iops : min= 1956, max= 7214, avg=5798.18, stdev=1598.36, samples=11 00:21:41.582 write: IOPS=6476, BW=25.3MiB/s (26.5MB/s)(136MiB/5389msec); 0 zone resets 00:21:41.582 slat (usec): min=5, max=2421, avg=64.09, stdev=158.82 00:21:41.582 clat (usec): min=622, max=14503, avg=6852.42, stdev=1030.11 00:21:41.582 lat (usec): min=680, max=14568, avg=6916.51, stdev=1034.50 00:21:41.582 clat percentiles (usec): 00:21:41.582 | 1.00th=[ 3785], 5.00th=[ 5080], 10.00th=[ 5800], 20.00th=[ 6259], 00:21:41.582 | 30.00th=[ 6521], 40.00th=[ 6718], 50.00th=[ 6915], 60.00th=[ 7046], 00:21:41.582 | 70.00th=[ 7242], 80.00th=[ 7504], 90.00th=[ 7832], 95.00th=[ 8225], 00:21:41.582 | 99.00th=[10028], 99.50th=[10945], 99.90th=[12125], 99.95th=[12911], 00:21:41.582 | 99.99th=[13566] 00:21:41.582 bw ( KiB/s): min= 8000, max=28336, per=89.58%, avg=23206.55, stdev=6105.34, samples=11 00:21:41.582 iops : min= 2000, max= 7084, avg=5801.64, stdev=1526.34, samples=11 00:21:41.582 lat (usec) : 750=0.01% 00:21:41.582 lat (msec) : 2=0.04%, 4=0.63%, 10=95.60%, 20=3.73% 00:21:41.582 cpu : usr=5.66%, sys=23.46%, ctx=6401, majf=0, minf=121 00:21:41.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:41.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:41.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:41.582 issued rwts: total=65906,34901,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:41.582 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:41.582 00:21:41.582 Run status group 0 (all jobs): 00:21:41.582 READ: bw=42.9MiB/s (44.9MB/s), 42.9MiB/s-42.9MiB/s (44.9MB/s-44.9MB/s), io=257MiB (270MB), run=6006-6006msec 00:21:41.582 WRITE: bw=25.3MiB/s (26.5MB/s), 25.3MiB/s-25.3MiB/s (26.5MB/s-26.5MB/s), io=136MiB (143MB), run=5389-5389msec 00:21:41.582 00:21:41.582 Disk stats (read/write): 00:21:41.582 nvme0n1: ios=64888/34237, merge=0/0, 
ticks=482095/218557, in_queue=700652, util=98.65% 00:21:41.582 13:39:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:42.155 13:39:54 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:21:42.155 13:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:21:42.155 13:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:21:42.155 13:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:42.155 13:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:42.155 13:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:42.155 13:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:42.155 13:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:21:42.155 13:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:21:42.155 13:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:42.155 13:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:42.155 13:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:42.155 13:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:21:42.155 13:39:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:21:43.528 13:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:43.528 13:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:43.528 13:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:21:43.528 13:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:21:43.528 13:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=92416 00:21:43.528 13:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:21:43.528 13:39:56 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:21:43.528 [global] 00:21:43.528 thread=1 00:21:43.528 invalidate=1 00:21:43.528 rw=randrw 00:21:43.528 time_based=1 00:21:43.528 runtime=6 00:21:43.528 ioengine=libaio 00:21:43.528 direct=1 00:21:43.528 bs=4096 00:21:43.528 iodepth=128 00:21:43.528 norandommap=0 00:21:43.528 numjobs=1 00:21:43.528 00:21:43.528 verify_dump=1 00:21:43.528 verify_backlog=512 00:21:43.528 verify_state_save=0 00:21:43.528 do_verify=1 00:21:43.528 verify=crc32c-intel 00:21:43.528 [job0] 00:21:43.528 filename=/dev/nvme0n1 00:21:43.528 Could not set queue depth (nvme0n1) 00:21:43.528 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:21:43.528 fio-3.35 00:21:43.528 Starting 1 thread 00:21:44.462 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:21:44.462 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:44.720 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:21:44.720 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:21:44.720 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:44.720 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:44.720 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:44.720 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:44.720 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:21:44.720 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:21:44.720 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:44.720 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:44.720 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:44.720 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:44.720 13:39:57 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:21:45.653 13:39:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:45.653 13:39:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:45.653 13:39:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:45.653 13:39:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:45.911 13:39:58 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:21:46.169 13:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:21:46.169 13:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:21:46.169 13:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:46.169 13:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:21:46.169 13:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:21:46.169 13:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:21:46.169 13:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:21:46.169 13:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:21:46.169 13:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:21:46.169 13:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:21:46.169 13:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:21:46.169 13:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:46.169 13:39:59 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:21:47.543 13:40:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:21:47.544 13:40:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:21:47.544 13:40:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:21:47.544 13:40:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 92416 00:21:49.441 00:21:49.441 job0: (groupid=0, jobs=1): err= 0: pid=92437: Wed May 15 13:40:02 2024 00:21:49.441 read: IOPS=12.4k, BW=48.5MiB/s (50.9MB/s)(291MiB/6004msec) 00:21:49.441 slat (usec): min=3, max=6741, avg=41.37, stdev=205.37 00:21:49.441 clat (usec): min=189, max=14897, avg=7175.71, stdev=1552.90 00:21:49.441 lat (usec): min=220, max=14908, avg=7217.08, stdev=1571.97 00:21:49.441 clat percentiles (usec): 00:21:49.441 | 1.00th=[ 3195], 5.00th=[ 4359], 10.00th=[ 4948], 20.00th=[ 5866], 00:21:49.441 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7570], 00:21:49.441 | 70.00th=[ 7832], 80.00th=[ 8291], 90.00th=[ 8848], 95.00th=[ 9241], 00:21:49.441 | 99.00th=[11207], 99.50th=[11731], 99.90th=[12387], 99.95th=[12780], 00:21:49.441 | 99.99th=[14222] 00:21:49.441 bw ( KiB/s): min=15824, max=39360, per=54.42%, avg=27026.64, stdev=7310.80, samples=11 00:21:49.441 iops : min= 3956, max= 9840, avg=6756.64, stdev=1827.69, samples=11 00:21:49.441 write: IOPS=7372, BW=28.8MiB/s (30.2MB/s)(149MiB/5160msec); 0 zone resets 00:21:49.441 slat (usec): min=12, max=3777, avg=50.62, stdev=133.49 00:21:49.441 clat (usec): min=564, max=12628, avg=5968.06, stdev=1490.98 00:21:49.441 lat (usec): min=597, max=12663, avg=6018.69, stdev=1505.66 00:21:49.441 clat percentiles (usec): 00:21:49.441 | 1.00th=[ 2573], 5.00th=[ 3359], 10.00th=[ 3752], 20.00th=[ 4359], 00:21:49.441 | 30.00th=[ 5145], 40.00th=[ 6063], 50.00th=[ 6456], 60.00th=[ 6718], 00:21:49.441 | 70.00th=[ 6915], 80.00th=[ 7177], 90.00th=[ 7439], 95.00th=[ 7701], 00:21:49.441 | 99.00th=[ 8717], 99.50th=[10159], 99.90th=[11469], 99.95th=[11863], 00:21:49.441 | 99.99th=[12649] 00:21:49.441 bw ( KiB/s): min=16784, max=39104, per=91.45%, avg=26969.91, stdev=7081.44, samples=11 00:21:49.441 iops : min= 4196, max= 9776, avg=6742.45, stdev=1770.35, samples=11 00:21:49.441 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:21:49.441 lat (msec) : 2=0.21%, 4=6.56%, 10=91.14%, 20=2.07% 00:21:49.441 cpu : usr=5.55%, sys=23.80%, ctx=7362, majf=0, minf=145 00:21:49.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:21:49.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:49.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:49.441 issued rwts: total=74549,38043,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:49.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:49.441 00:21:49.441 Run status group 0 (all jobs): 00:21:49.441 READ: bw=48.5MiB/s (50.9MB/s), 48.5MiB/s-48.5MiB/s (50.9MB/s-50.9MB/s), io=291MiB (305MB), run=6004-6004msec 00:21:49.441 WRITE: bw=28.8MiB/s (30.2MB/s), 28.8MiB/s-28.8MiB/s (30.2MB/s-30.2MB/s), io=149MiB (156MB), run=5160-5160msec 00:21:49.441 00:21:49.441 Disk stats (read/write): 00:21:49.441 nvme0n1: ios=73002/38043, merge=0/0, ticks=490314/211310, in_queue=701624, util=98.67% 00:21:49.441 13:40:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:49.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:21:49.699 13:40:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:21:49.699 13:40:02 
nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1215 -- # local i=0 00:21:49.699 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:21:49.699 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:49.699 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:21:49.699 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:21:49.699 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # return 0 00:21:49.699 13:40:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:49.956 rmmod nvme_tcp 00:21:49.956 rmmod nvme_fabrics 00:21:49.956 rmmod nvme_keyring 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 92117 ']' 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 92117 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@946 -- # '[' -z 92117 ']' 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@950 -- # kill -0 92117 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # uname 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92117 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92117' 00:21:49.956 killing process with pid 92117 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # kill 92117 00:21:49.956 [2024-05-15 13:40:02.964016] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:49.956 13:40:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@970 -- # wait 92117 00:21:50.214 13:40:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:50.214 13:40:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:50.214 13:40:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:50.214 13:40:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:50.214 13:40:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:50.214 13:40:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.214 13:40:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.214 13:40:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.214 13:40:03 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:50.214 00:21:50.214 real 0m20.565s 00:21:50.214 user 1m20.833s 00:21:50.214 sys 0m6.420s 00:21:50.214 13:40:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:50.214 ************************************ 00:21:50.214 END TEST nvmf_target_multipath 00:21:50.214 ************************************ 00:21:50.214 13:40:03 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:50.214 13:40:03 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:21:50.214 13:40:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:50.214 13:40:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:50.214 13:40:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:50.214 ************************************ 00:21:50.214 START TEST nvmf_zcopy 00:21:50.214 ************************************ 00:21:50.214 13:40:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:21:50.472 * Looking for test storage... 
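Editor's note: before following the zcopy test, the failover mechanics exercised by the multipath test above are worth spelling out: the host connects to the same subsystem once per listener, the test flips each listener's ANA state over JSON-RPC while fio keeps issuing I/O, and check_ana_state polls the kernel's per-path sysfs files until they report the expected value (up to 20 one-second retries in the trace). A condensed sketch using the addresses and NQN from the trace (rpc.py path shortened; in this run nvme0c0n1 is the 10.0.0.2 path and nvme0c1n1 the 10.0.0.3 path):

    # one controller per path; host identity and the -g/-G flags copied from the trace
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -g -G
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -g -G

    # fail the first path over to the second
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n non_optimized

    # what check_ana_state polls on the host side
    cat /sys/block/nvme0c0n1/ana_state   # expected: inaccessible
    cat /sys/block/nvme0c1n1/ana_state   # expected: non-optimized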
00:21:50.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:50.472 Cannot find device "nvmf_tgt_br" 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:50.472 Cannot find device "nvmf_tgt_br2" 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:50.472 Cannot find device "nvmf_tgt_br" 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:50.472 Cannot find device "nvmf_tgt_br2" 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:50.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:50.472 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:50.472 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:50.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:50.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:21:50.730 00:21:50.730 --- 10.0.0.2 ping statistics --- 00:21:50.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.730 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:50.730 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:50.730 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:50.730 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:21:50.730 00:21:50.730 --- 10.0.0.3 ping statistics --- 00:21:50.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.730 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:50.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:50.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:21:50.731 00:21:50.731 --- 10.0.0.1 ping statistics --- 00:21:50.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:50.731 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=92709 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 92709 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 92709 ']' 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:50.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:50.731 13:40:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:50.989 [2024-05-15 13:40:03.831131] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:50.989 [2024-05-15 13:40:03.831233] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.989 [2024-05-15 13:40:03.954142] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:50.989 [2024-05-15 13:40:03.973473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.989 [2024-05-15 13:40:04.079189] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.989 [2024-05-15 13:40:04.079255] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:50.989 [2024-05-15 13:40:04.079270] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.989 [2024-05-15 13:40:04.079280] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.989 [2024-05-15 13:40:04.079290] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.989 [2024-05-15 13:40:04.079323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.967 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:51.967 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:21:51.967 13:40:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:51.967 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:51.967 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:51.967 13:40:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.967 13:40:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:21:51.967 13:40:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:21:51.967 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.967 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:51.967 [2024-05-15 13:40:04.902043] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.967 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.967 13:40:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:51.968 [2024-05-15 13:40:04.917936] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:51.968 [2024-05-15 13:40:04.918167] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:51.968 malloc0 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:51.968 { 00:21:51.968 "params": { 00:21:51.968 "name": "Nvme$subsystem", 00:21:51.968 "trtype": "$TEST_TRANSPORT", 00:21:51.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.968 "adrfam": "ipv4", 00:21:51.968 "trsvcid": "$NVMF_PORT", 00:21:51.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.968 "hdgst": ${hdgst:-false}, 00:21:51.968 "ddgst": ${ddgst:-false} 00:21:51.968 }, 00:21:51.968 "method": "bdev_nvme_attach_controller" 00:21:51.968 } 00:21:51.968 EOF 00:21:51.968 )") 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:21:51.968 13:40:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:51.968 "params": { 00:21:51.968 "name": "Nvme1", 00:21:51.968 "trtype": "tcp", 00:21:51.968 "traddr": "10.0.0.2", 00:21:51.968 "adrfam": "ipv4", 00:21:51.968 "trsvcid": "4420", 00:21:51.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:51.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:51.968 "hdgst": false, 00:21:51.968 "ddgst": false 00:21:51.968 }, 00:21:51.968 "method": "bdev_nvme_attach_controller" 00:21:51.968 }' 00:21:51.968 [2024-05-15 13:40:05.024748] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:21:51.968 [2024-05-15 13:40:05.024844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92760 ] 00:21:52.226 [2024-05-15 13:40:05.147411] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:52.226 [2024-05-15 13:40:05.166784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.226 [2024-05-15 13:40:05.271154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.484 Running I/O for 10 seconds... 
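For readers skimming the trace: the target bring-up for this zcopy run (the zcopy.sh@22-33 steps traced above) reduces to the short RPC sequence sketched here before the 10-second verify results that follow below. This is only a rough consolidation of commands already visible in the xtrace, not part of the test script itself; it substitutes scripts/rpc.py (the path used by the multipath teardown earlier in this log) for the test's rpc_cmd helper, and assumes nvmf_tgt is already up and listening on /var/tmp/spdk.sock inside the nvmf_tgt_ns_spdk namespace.

  # Consolidated sketch of the zcopy target setup traced above (assumption:
  # rpc.py reaches the same nvmf_tgt instance via /var/tmp/spdk.sock).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MB malloc bdev, 4096-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Initiator side: in the trace, fd 62 is the process substitution carrying the
  # generated JSON (bdev_nvme_attach_controller to 10.0.0.2:4420) shown above.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192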
00:22:02.451 00:22:02.451 Latency(us) 00:22:02.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.451 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:22:02.451 Verification LBA range: start 0x0 length 0x1000 00:22:02.451 Nvme1n1 : 10.02 5942.94 46.43 0.00 0.00 21468.30 3574.69 33840.41 00:22:02.451 =================================================================================================================== 00:22:02.451 Total : 5942.94 46.43 0.00 0.00 21468.30 3574.69 33840.41 00:22:02.710 13:40:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=92877 00:22:02.710 13:40:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:22:02.710 13:40:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:02.710 13:40:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:22:02.710 13:40:15 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:22:02.710 13:40:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:22:02.710 13:40:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:22:02.710 13:40:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:02.710 13:40:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:02.710 { 00:22:02.710 "params": { 00:22:02.710 "name": "Nvme$subsystem", 00:22:02.710 "trtype": "$TEST_TRANSPORT", 00:22:02.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:02.710 "adrfam": "ipv4", 00:22:02.710 "trsvcid": "$NVMF_PORT", 00:22:02.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:02.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:02.710 "hdgst": ${hdgst:-false}, 00:22:02.710 "ddgst": ${ddgst:-false} 00:22:02.710 }, 00:22:02.710 "method": "bdev_nvme_attach_controller" 00:22:02.710 } 00:22:02.710 EOF 00:22:02.710 )") 00:22:02.710 13:40:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:22:02.710 [2024-05-15 13:40:15.697524] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.710 [2024-05-15 13:40:15.697575] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.710 13:40:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:22:02.710 13:40:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:22:02.710 13:40:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:02.710 "params": { 00:22:02.710 "name": "Nvme1", 00:22:02.710 "trtype": "tcp", 00:22:02.710 "traddr": "10.0.0.2", 00:22:02.710 "adrfam": "ipv4", 00:22:02.710 "trsvcid": "4420", 00:22:02.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:02.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:02.710 "hdgst": false, 00:22:02.710 "ddgst": false 00:22:02.710 }, 00:22:02.710 "method": "bdev_nvme_attach_controller" 00:22:02.710 }' 00:22:02.710 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.710 [2024-05-15 13:40:15.709492] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.710 [2024-05-15 13:40:15.709524] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.710 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.710 [2024-05-15 13:40:15.721499] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.710 [2024-05-15 13:40:15.721530] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.710 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.710 [2024-05-15 13:40:15.733493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.710 [2024-05-15 13:40:15.733527] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.710 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.710 [2024-05-15 13:40:15.745498] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.710 [2024-05-15 13:40:15.745529] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.710 [2024-05-15 13:40:15.748383] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:22:02.710 [2024-05-15 13:40:15.748464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92877 ] 00:22:02.710 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.710 [2024-05-15 13:40:15.757502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.710 [2024-05-15 13:40:15.757535] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.710 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.710 [2024-05-15 13:40:15.769526] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.710 [2024-05-15 13:40:15.769570] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.710 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.710 [2024-05-15 13:40:15.781540] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.710 [2024-05-15 13:40:15.781586] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.710 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.710 [2024-05-15 13:40:15.793528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.710 [2024-05-15 13:40:15.793563] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.710 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.711 [2024-05-15 13:40:15.805532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.711 [2024-05-15 13:40:15.805568] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.969 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.969 [2024-05-15 13:40:15.813506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.969 [2024-05-15 13:40:15.813537] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.969 
2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.969 [2024-05-15 13:40:15.825521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.969 [2024-05-15 13:40:15.825555] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.969 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.969 [2024-05-15 13:40:15.837529] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.969 [2024-05-15 13:40:15.837567] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.969 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.969 [2024-05-15 13:40:15.845512] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.969 [2024-05-15 13:40:15.845543] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.969 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.969 [2024-05-15 13:40:15.857525] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.969 [2024-05-15 13:40:15.857558] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.969 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.969 [2024-05-15 13:40:15.869549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.969 [2024-05-15 13:40:15.869587] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.969 [2024-05-15 13:40:15.872924] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:02.970 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:15.881549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:15.881586] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:15.889292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.970 [2024-05-15 13:40:15.893574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:15.893624] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:15.905582] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:15.905646] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:15.917561] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:15.917595] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:15.929574] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:15.929634] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:15.941568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:15.941598] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:15.953608] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:15.953659] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:15.961568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:15.961600] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:15.973577] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:15.973617] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:15.985611] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:15.985653] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:15.991217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.970 [2024-05-15 13:40:15.997600] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:15.997646] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:16.009648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:16.009690] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:16.017615] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:16.017645] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:16.029638] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:16.029682] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:16.041657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:16.041701] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:16.053686] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:16.053732] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:02.970 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:02.970 [2024-05-15 13:40:16.065645] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:02.970 [2024-05-15 13:40:16.065690] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.077681] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.077725] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.089651] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.089688] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.101654] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.101691] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.113647] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.113680] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.125644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.125676] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.137643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.137692] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.145637] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.145669] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.157659] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.157689] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.165647] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.165679] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 Running I/O for 5 seconds... 00:22:03.229 [2024-05-15 13:40:16.177655] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.177688] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.192366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.192431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.207253] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.207317] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.216580] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.216646] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.232420] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.232469] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.242928] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.242968] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.256211] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.256255] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.266224] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.266265] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.280831] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.280875] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.297819] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.297861] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.229 [2024-05-15 13:40:16.315332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.229 [2024-05-15 13:40:16.315399] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.229 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.331006] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.331056] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.347456] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.347513] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.364235] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.364281] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.374572] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.374619] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.389054] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.389090] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.405816] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.405855] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.421101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.421143] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.436546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.436610] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.446063] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.446109] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.462762] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.462833] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.480699] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.480780] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.499318] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.499379] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.516141] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.516204] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.527921] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.527972] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.546311] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:22:03.488 [2024-05-15 13:40:16.546369] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.560054] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.560101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.488 [2024-05-15 13:40:16.578519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.488 [2024-05-15 13:40:16.578571] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.488 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.594764] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.594812] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.606371] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.606415] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.623244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.623297] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.640965] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.641010] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.658822] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.658866] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.675724] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.675772] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.692784] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.692836] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.709436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.709499] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.721550] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.721625] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.738784] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.738848] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.756047] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.756112] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.773865] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.773923] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.790960] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.791024] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.808889] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.808941] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.747 [2024-05-15 13:40:16.825975] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.747 [2024-05-15 13:40:16.826026] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:03.747 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:03.748 [2024-05-15 13:40:16.843811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:03.748 [2024-05-15 13:40:16.843863] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.007 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.007 [2024-05-15 13:40:16.861916] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.007 [2024-05-15 13:40:16.861963] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.007 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.007 [2024-05-15 13:40:16.879917] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.007 [2024-05-15 13:40:16.879963] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.007 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.007 [2024-05-15 13:40:16.895992] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.007 [2024-05-15 13:40:16.896036] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.007 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.007 [2024-05-15 13:40:16.912994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.007 [2024-05-15 13:40:16.913173] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.007 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.007 [2024-05-15 13:40:16.929069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.007 [2024-05-15 13:40:16.929264] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.007 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.007 [2024-05-15 13:40:16.939501] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.007 [2024-05-15 13:40:16.939758] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.007 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.007 [2024-05-15 13:40:16.954685] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.007 [2024-05-15 13:40:16.954856] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.007 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.007 [2024-05-15 13:40:16.965070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.007 [2024-05-15 13:40:16.965268] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:22:04.008 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.008 [2024-05-15 13:40:16.981096] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.008 [2024-05-15 13:40:16.981281] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.008 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.008 [2024-05-15 13:40:16.991712] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.008 [2024-05-15 13:40:16.991856] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.008 2024/05/15 13:40:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.008 [2024-05-15 13:40:17.002933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.008 [2024-05-15 13:40:17.003078] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.008 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.008 [2024-05-15 13:40:17.019337] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.008 [2024-05-15 13:40:17.019382] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.008 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.008 [2024-05-15 13:40:17.029970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.008 [2024-05-15 13:40:17.030012] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.008 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.008 [2024-05-15 13:40:17.044773] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.008 [2024-05-15 13:40:17.044831] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.008 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:22:04.008 [2024-05-15 13:40:17.055383] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.008 [2024-05-15 13:40:17.055429] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.008 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.008 [2024-05-15 13:40:17.070138] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.008 [2024-05-15 13:40:17.070192] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.008 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.008 [2024-05-15 13:40:17.081466] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.008 [2024-05-15 13:40:17.081503] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.008 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.008 [2024-05-15 13:40:17.091926] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.008 [2024-05-15 13:40:17.091961] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.008 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.267 [2024-05-15 13:40:17.106770] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.267 [2024-05-15 13:40:17.106804] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.267 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.267 [2024-05-15 13:40:17.116763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.267 [2024-05-15 13:40:17.116799] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.267 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.267 [2024-05-15 13:40:17.131948] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.267 [2024-05-15 13:40:17.131987] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.267 2024/05/15 13:40:17 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.267 [2024-05-15 13:40:17.149313] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.267 [2024-05-15 13:40:17.149350] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.267 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.267 [2024-05-15 13:40:17.165951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.268 [2024-05-15 13:40:17.166008] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.268 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.268 [2024-05-15 13:40:17.183100] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.268 [2024-05-15 13:40:17.183162] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.268 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.268 [2024-05-15 13:40:17.200424] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.268 [2024-05-15 13:40:17.200484] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.268 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.268 [2024-05-15 13:40:17.210705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.268 [2024-05-15 13:40:17.210750] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.268 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.268 [2024-05-15 13:40:17.225167] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.268 [2024-05-15 13:40:17.225205] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.268 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.268 [2024-05-15 13:40:17.242167] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.268 [2024-05-15 13:40:17.242206] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.268 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.268 [2024-05-15 13:40:17.257145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.268 [2024-05-15 13:40:17.257181] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.268 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.268 [2024-05-15 13:40:17.273229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.268 [2024-05-15 13:40:17.273269] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.268 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.268 [2024-05-15 13:40:17.289997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.268 [2024-05-15 13:40:17.290042] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.268 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.268 [2024-05-15 13:40:17.305156] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.268 [2024-05-15 13:40:17.305192] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.268 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.268 [2024-05-15 13:40:17.321203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.268 [2024-05-15 13:40:17.321241] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.268 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.268 [2024-05-15 13:40:17.337639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.268 [2024-05-15 13:40:17.337678] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.268 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.268 [2024-05-15 13:40:17.355675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.268 [2024-05-15 13:40:17.355719] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.268 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.370750] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.370792] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.382793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.382959] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.399716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.399901] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.415740] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.415911] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.426520] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.426892] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.442269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.442320] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.459158] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.459195] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.469737] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.469775] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.484135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.484172] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.500243] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.500299] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.509458] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.509517] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.524957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.525008] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.535007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.535163] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.550330] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.550482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.561004] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.561161] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.572166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.572203] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.527 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.527 [2024-05-15 13:40:17.589270] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.527 [2024-05-15 13:40:17.589307] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.528 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.528 [2024-05-15 13:40:17.606937] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.528 [2024-05-15 13:40:17.607000] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.528 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.528 [2024-05-15 13:40:17.622023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
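Editor's note: the repeated entries above appear to be the error path this test is deliberately exercising — the harness keeps re-issuing nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that NSID is already claimed, so the target logs "Requested NSID 1 already in use" / "Unable to add namespace" and the client sees JSON-RPC code -32602 (Invalid parameters) each time. The sketch below reconstructs one such request from the parameters printed in the log; it is illustrative only. The socket path /var/tmp/spdk.sock is the usual SPDK default rather than something stated in this log, and the actual harness looks like a Go client (hence the map[...] and %!s(bool=false) formatting), not this Python snippet.

    # Illustrative sketch only: rebuild the failing JSON-RPC request seen above.
    # Assumptions: SPDK's default RPC socket at /var/tmp/spdk.sock, and a single
    # recv() being enough to capture the small error response.
    import json
    import socket

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/spdk.sock")
        sock.sendall(json.dumps(request).encode())
        # With NSID 1 already in use, the target rejects the call with
        # {"error": {"code": -32602, "message": "Invalid parameters"}},
        # matching the entries logged above.
        print(sock.recv(65536).decode())
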
00:22:04.528 [2024-05-15 13:40:17.622060] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.786 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.786 [2024-05-15 13:40:17.638664] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.786 [2024-05-15 13:40:17.638700] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.786 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.786 [2024-05-15 13:40:17.654535] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.786 [2024-05-15 13:40:17.654589] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.786 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.786 [2024-05-15 13:40:17.671443] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.786 [2024-05-15 13:40:17.671499] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.786 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.786 [2024-05-15 13:40:17.688404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.786 [2024-05-15 13:40:17.688440] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.786 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.786 [2024-05-15 13:40:17.703674] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.786 [2024-05-15 13:40:17.703719] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.786 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.786 [2024-05-15 13:40:17.720730] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.786 [2024-05-15 13:40:17.720806] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.786 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.786 [2024-05-15 13:40:17.736901] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.786 [2024-05-15 13:40:17.737145] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.786 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.786 [2024-05-15 13:40:17.754201] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.786 [2024-05-15 13:40:17.754244] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.786 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.786 [2024-05-15 13:40:17.769644] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.786 [2024-05-15 13:40:17.769678] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.786 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.786 [2024-05-15 13:40:17.779872] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.786 [2024-05-15 13:40:17.779936] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.786 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.787 [2024-05-15 13:40:17.794228] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.787 [2024-05-15 13:40:17.794261] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.787 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.787 [2024-05-15 13:40:17.811113] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.787 [2024-05-15 13:40:17.811157] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.787 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.787 [2024-05-15 13:40:17.821692] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.787 [2024-05-15 13:40:17.821739] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.787 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.787 [2024-05-15 13:40:17.832301] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.787 [2024-05-15 13:40:17.832336] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.787 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.787 [2024-05-15 13:40:17.843103] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.787 [2024-05-15 13:40:17.843151] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.787 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.787 [2024-05-15 13:40:17.858365] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.787 [2024-05-15 13:40:17.858420] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.787 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:04.787 [2024-05-15 13:40:17.875231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:04.787 [2024-05-15 13:40:17.875269] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:04.787 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.045 [2024-05-15 13:40:17.891091] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.045 [2024-05-15 13:40:17.891131] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.045 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.045 [2024-05-15 13:40:17.900738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.045 [2024-05-15 13:40:17.900773] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.045 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.045 [2024-05-15 13:40:17.916805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.045 [2024-05-15 13:40:17.916848] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.045 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.045 [2024-05-15 13:40:17.932294] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.045 [2024-05-15 13:40:17.932333] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.045 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.045 [2024-05-15 13:40:17.949742] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.045 [2024-05-15 13:40:17.949808] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.045 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.045 [2024-05-15 13:40:17.966135] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.045 [2024-05-15 13:40:17.966167] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.045 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.045 [2024-05-15 13:40:17.981739] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.045 [2024-05-15 13:40:17.981770] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.046 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.046 [2024-05-15 13:40:17.992005] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.046 [2024-05-15 13:40:17.992043] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.046 2024/05/15 13:40:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.046 [2024-05-15 13:40:18.006729] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.046 [2024-05-15 13:40:18.006762] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:22:05.046 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.046 [2024-05-15 13:40:18.018332] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.046 [2024-05-15 13:40:18.018366] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.046 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.046 [2024-05-15 13:40:18.035381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.046 [2024-05-15 13:40:18.035424] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.046 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.046 [2024-05-15 13:40:18.050284] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.046 [2024-05-15 13:40:18.050322] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.046 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.046 [2024-05-15 13:40:18.064935] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.046 [2024-05-15 13:40:18.064965] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.046 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.046 [2024-05-15 13:40:18.080785] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.046 [2024-05-15 13:40:18.080827] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.046 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.046 [2024-05-15 13:40:18.095546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.046 [2024-05-15 13:40:18.095590] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.046 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:22:05.046 [2024-05-15 13:40:18.112560] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.046 [2024-05-15 13:40:18.112639] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.046 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.046 [2024-05-15 13:40:18.128461] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.046 [2024-05-15 13:40:18.128514] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.046 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.143947] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.143996] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.160268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.160313] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.176945] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.176988] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.191874] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.191913] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.209070] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.209135] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.224683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.224734] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.235231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.235264] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.249408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.249439] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.265186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.265217] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.282970] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.283013] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.293280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.293313] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.307985] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.308045] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.319376] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.319408] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.334910] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.334955] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.351897] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.351960] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.366391] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.366444] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.382239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.382295] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.314 [2024-05-15 13:40:18.399186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.314 [2024-05-15 13:40:18.399237] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.314 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.603 [2024-05-15 13:40:18.415271] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.603 [2024-05-15 13:40:18.415311] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.603 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.603 [2024-05-15 13:40:18.431388] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.603 [2024-05-15 13:40:18.431441] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.603 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.603 [2024-05-15 13:40:18.441989] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.603 [2024-05-15 13:40:18.442044] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.603 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.603 [2024-05-15 13:40:18.456583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.603 [2024-05-15 13:40:18.456654] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.603 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.603 [2024-05-15 13:40:18.477035] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.603 [2024-05-15 13:40:18.477091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.604 [2024-05-15 13:40:18.493485] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.604 [2024-05-15 13:40:18.493528] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.604 [2024-05-15 13:40:18.503581] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:05.604 [2024-05-15 13:40:18.503629] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.604 [2024-05-15 13:40:18.518264] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.604 [2024-05-15 13:40:18.518298] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.604 [2024-05-15 13:40:18.529068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.604 [2024-05-15 13:40:18.529101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.604 [2024-05-15 13:40:18.544026] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.604 [2024-05-15 13:40:18.544076] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.604 [2024-05-15 13:40:18.560418] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.604 [2024-05-15 13:40:18.560454] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.604 [2024-05-15 13:40:18.578844] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.604 [2024-05-15 13:40:18.578899] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.604 [2024-05-15 13:40:18.594247] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.604 [2024-05-15 13:40:18.594307] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.604 [2024-05-15 13:40:18.610346] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.604 [2024-05-15 13:40:18.610389] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.604 [2024-05-15 13:40:18.620889] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.604 [2024-05-15 13:40:18.620920] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.604 [2024-05-15 13:40:18.635522] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.604 [2024-05-15 13:40:18.635553] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.604 [2024-05-15 13:40:18.651437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.604 [2024-05-15 13:40:18.651470] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.604 [2024-05-15 13:40:18.668451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.604 [2024-05-15 13:40:18.668501] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.604 [2024-05-15 13:40:18.684240] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.604 [2024-05-15 13:40:18.684277] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.604 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.702825] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:22:05.863 [2024-05-15 13:40:18.702882] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.718793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.718844] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.728902] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.728933] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.743320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.743354] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.758546] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.758609] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.769557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.769614] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.784186] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.784235] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.794580] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.794620] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.809359] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.809394] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.824545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.824589] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.834438] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.834488] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.848997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.849040] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.864131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.864177] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.881548] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.881584] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.896366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.896410] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.912734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.912770] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.927906] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.927951] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.944542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.944574] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:05.863 [2024-05-15 13:40:18.960496] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:05.863 [2024-05-15 13:40:18.960528] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:05.863 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.122 [2024-05-15 13:40:18.971442] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.122 [2024-05-15 13:40:18.971497] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.122 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.122 [2024-05-15 13:40:18.987147] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.122 [2024-05-15 13:40:18.987192] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.122 2024/05/15 13:40:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.122 [2024-05-15 13:40:19.003713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.122 [2024-05-15 13:40:19.003762] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.122 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.122 [2024-05-15 13:40:19.019310] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.122 [2024-05-15 13:40:19.019356] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.122 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.122 [2024-05-15 13:40:19.035636] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.122 [2024-05-15 13:40:19.035685] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.122 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.122 [2024-05-15 13:40:19.052940] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.122 [2024-05-15 13:40:19.052987] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.122 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.122 [2024-05-15 13:40:19.068543] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.122 [2024-05-15 13:40:19.068591] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.122 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.122 [2024-05-15 13:40:19.084777] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.122 [2024-05-15 13:40:19.084824] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:22:06.122 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.122 [2024-05-15 13:40:19.101834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.122 [2024-05-15 13:40:19.101870] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.123 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.123 [2024-05-15 13:40:19.117398] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.123 [2024-05-15 13:40:19.117454] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.123 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.123 [2024-05-15 13:40:19.127395] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.123 [2024-05-15 13:40:19.127431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.123 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.123 [2024-05-15 13:40:19.143645] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.123 [2024-05-15 13:40:19.143681] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.123 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.123 [2024-05-15 13:40:19.158738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.123 [2024-05-15 13:40:19.158802] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.123 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.123 [2024-05-15 13:40:19.174988] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.123 [2024-05-15 13:40:19.175037] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.123 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:22:06.123 [2024-05-15 13:40:19.192897] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.123 [2024-05-15 13:40:19.192944] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.123 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.123 [2024-05-15 13:40:19.208727] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.123 [2024-05-15 13:40:19.208767] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.123 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.226557] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.226594] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.241908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.241942] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.252002] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.252047] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.266936] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.266975] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.284527] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.284572] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.299258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.299294] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.314317] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.314358] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.324614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.324645] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.334907] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.334944] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.345675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.345715] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.363304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.363360] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.378654] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.378695] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.389015] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.389051] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.403897] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.403934] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.414657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.414702] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.429269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.429313] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.439530] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.439562] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.454068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.454101] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.382 [2024-05-15 13:40:19.469253] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.382 [2024-05-15 13:40:19.469292] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.382 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.480741] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.480776] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.498327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.498384] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.513057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.513100] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.528239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.528282] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.538533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.538569] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.552711] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.552768] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.569731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.569771] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.585170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.585213] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.595408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.595450] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.606145] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.606184] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.618465] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.618636] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.629236] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.629495] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.644265] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.644307] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.654966] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.655016] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.670057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.670114] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.687079] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.687140] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.702333] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.702379] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.719054] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.642 [2024-05-15 13:40:19.719116] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.642 [2024-05-15 13:40:19.734722] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:22:06.642 [2024-05-15 13:40:19.734761] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.642 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.745017] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.745054] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.759667] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.759718] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.769767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.769802] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.784445] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.784482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.801662] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.801698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.816686] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.816728] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.831568] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.831618] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.847240] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.847282] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.857851] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.857889] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.872126] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.872178] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.887144] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.887193] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.902980] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.903031] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.913304] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.913349] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.927573] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.927630] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.943217] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.943256] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.953728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.953773] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.968206] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.968245] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:06.902 [2024-05-15 13:40:19.983539] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:06.902 [2024-05-15 13:40:19.983577] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:06.902 2024/05/15 13:40:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.162 [2024-05-15 13:40:20.002023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.162 [2024-05-15 13:40:20.002065] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.162 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.162 [2024-05-15 13:40:20.017783] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.162 [2024-05-15 13:40:20.017826] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.163 [2024-05-15 13:40:20.034229] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.034289] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.163 [2024-05-15 13:40:20.051038] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.051077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.163 [2024-05-15 13:40:20.061321] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.061357] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.163 [2024-05-15 13:40:20.075581] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.075634] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.163 [2024-05-15 13:40:20.092212] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.092251] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.163 [2024-05-15 13:40:20.107267] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.107306] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.163 [2024-05-15 13:40:20.122695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.122732] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.163 [2024-05-15 13:40:20.139029] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.139068] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.163 [2024-05-15 13:40:20.155501] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.155557] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.163 [2024-05-15 13:40:20.174203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.174250] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.163 [2024-05-15 13:40:20.189159] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.189200] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.163 [2024-05-15 13:40:20.201281] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.201452] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:22:07.163 [2024-05-15 13:40:20.219497] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.219711] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.163 [2024-05-15 13:40:20.234506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.234684] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.163 [2024-05-15 13:40:20.249456] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.163 [2024-05-15 13:40:20.249702] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.163 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.421 [2024-05-15 13:40:20.265101] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.421 [2024-05-15 13:40:20.265150] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.421 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.421 [2024-05-15 13:40:20.274391] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.421 [2024-05-15 13:40:20.274430] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.421 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.421 [2024-05-15 13:40:20.290290] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.421 [2024-05-15 13:40:20.290331] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.421 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.421 [2024-05-15 13:40:20.305401] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.421 [2024-05-15 13:40:20.305441] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.421 2024/05/15 13:40:20 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.421 [2024-05-15 13:40:20.320994] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.421 [2024-05-15 13:40:20.321032] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.421 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.421 [2024-05-15 13:40:20.331366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.421 [2024-05-15 13:40:20.331403] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.421 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.421 [2024-05-15 13:40:20.345763] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.421 [2024-05-15 13:40:20.345799] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.421 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.421 [2024-05-15 13:40:20.363031] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.421 [2024-05-15 13:40:20.363095] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.421 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.421 [2024-05-15 13:40:20.383830] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.421 [2024-05-15 13:40:20.383875] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.422 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.422 [2024-05-15 13:40:20.398905] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.422 [2024-05-15 13:40:20.398944] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.422 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.422 [2024-05-15 13:40:20.414519] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.422 [2024-05-15 13:40:20.414557] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.422 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.422 [2024-05-15 13:40:20.429892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.422 [2024-05-15 13:40:20.429948] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.422 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.422 [2024-05-15 13:40:20.445745] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.422 [2024-05-15 13:40:20.445803] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.422 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.422 [2024-05-15 13:40:20.464884] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.422 [2024-05-15 13:40:20.464930] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.422 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.422 [2024-05-15 13:40:20.479540] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.422 [2024-05-15 13:40:20.479578] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.422 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.422 [2024-05-15 13:40:20.489223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.422 [2024-05-15 13:40:20.489274] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.422 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.422 [2024-05-15 13:40:20.504889] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.422 [2024-05-15 13:40:20.504927] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.422 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.680 [2024-05-15 13:40:20.521218] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.680 [2024-05-15 13:40:20.521269] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.680 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.680 [2024-05-15 13:40:20.531413] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.680 [2024-05-15 13:40:20.531449] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.680 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.680 [2024-05-15 13:40:20.542419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.680 [2024-05-15 13:40:20.542455] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.680 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.680 [2024-05-15 13:40:20.553061] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.680 [2024-05-15 13:40:20.553099] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.680 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.680 [2024-05-15 13:40:20.568027] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.680 [2024-05-15 13:40:20.568080] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.680 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.680 [2024-05-15 13:40:20.583780] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.680 [2024-05-15 13:40:20.583821] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.680 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.680 [2024-05-15 13:40:20.599269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:22:07.680 [2024-05-15 13:40:20.599306] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.680 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.680 [2024-05-15 13:40:20.617201] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.680 [2024-05-15 13:40:20.617238] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.680 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.680 [2024-05-15 13:40:20.631596] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.680 [2024-05-15 13:40:20.631645] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.680 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.680 [2024-05-15 13:40:20.647225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.680 [2024-05-15 13:40:20.647262] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.680 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.680 [2024-05-15 13:40:20.659679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.680 [2024-05-15 13:40:20.659859] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.680 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.680 [2024-05-15 13:40:20.678305] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.680 [2024-05-15 13:40:20.678464] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.680 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.680 [2024-05-15 13:40:20.693013] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.680 [2024-05-15 13:40:20.693072] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.680 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.680 [2024-05-15 13:40:20.708434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.681 [2024-05-15 13:40:20.708469] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.681 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.681 [2024-05-15 13:40:20.718877] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.681 [2024-05-15 13:40:20.718914] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.681 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.681 [2024-05-15 13:40:20.733547] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.681 [2024-05-15 13:40:20.733599] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.681 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.681 [2024-05-15 13:40:20.750311] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.681 [2024-05-15 13:40:20.750346] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.681 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.681 [2024-05-15 13:40:20.765504] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.681 [2024-05-15 13:40:20.765556] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.681 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.681 [2024-05-15 13:40:20.776268] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.681 [2024-05-15 13:40:20.776454] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.791327] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:22:07.939 [2024-05-15 13:40:20.791492] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.805906] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:20.806091] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.823188] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:20.823230] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.838984] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:20.839024] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.857419] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:20.857466] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.868206] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:20.868245] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.878506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:20.878547] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.893189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:20.893231] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.910312] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:20.910353] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.925826] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:20.925869] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.942701] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:20.942744] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.958778] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:20.958832] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.976337] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:20.976386] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.986595] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:20.986649] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:20.997157] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:20.997203] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.939 2024/05/15 13:40:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.939 [2024-05-15 13:40:21.008293] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.939 [2024-05-15 13:40:21.008328] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.940 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.940 [2024-05-15 13:40:21.021373] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.940 [2024-05-15 13:40:21.021411] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:07.940 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:07.940 [2024-05-15 13:40:21.036430] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:07.940 [2024-05-15 13:40:21.036477] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.197 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.197 [2024-05-15 13:40:21.048131] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.197 [2024-05-15 13:40:21.048190] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.197 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.197 [2024-05-15 13:40:21.059962] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.197 [2024-05-15 13:40:21.060010] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.197 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.197 [2024-05-15 13:40:21.074631] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.197 [2024-05-15 13:40:21.074669] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.197 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.197 [2024-05-15 13:40:21.086971] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.197 [2024-05-15 13:40:21.087007] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.104847] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.104894] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.115463] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.115527] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.126713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.126771] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.139634] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.139681] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.156045] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.156100] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.166850] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.166894] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.179170] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.179214] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.187238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.187278] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:22:08.198
00:22:08.198 Latency(us)
00:22:08.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:08.198 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:22:08.198 Nvme1n1 : 5.01 11416.50 89.19 0.00 0.00 11196.11 4796.04 23950.43
00:22:08.198 ===================================================================================================================
00:22:08.198 Total : 11416.50 89.19 0.00 0.00 11196.11 4796.04 23950.43
00:22:08.198 [2024-05-15 13:40:21.193689] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.193728] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.201691] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.201730] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 
13:40:21.209707] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.209756] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.217713] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.217763] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.225716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.225764] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.233729] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.233778] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.241731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.241778] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.249728] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.249777] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.257738] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.257788] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.265750] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.265805] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.273776] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.273830] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.198 [2024-05-15 13:40:21.285780] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.198 [2024-05-15 13:40:21.285847] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.198 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.455 [2024-05-15 13:40:21.297795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.455 [2024-05-15 13:40:21.297854] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.455 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.455 [2024-05-15 13:40:21.309774] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.455 [2024-05-15 13:40:21.309827] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.455 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.456 [2024-05-15 13:40:21.321806] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.456 [2024-05-15 13:40:21.321867] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.456 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.456 [2024-05-15 13:40:21.329787] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.456 [2024-05-15 13:40:21.329846] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.456 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.456 [2024-05-15 13:40:21.341805] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.456 [2024-05-15 13:40:21.341865] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.456 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.456 [2024-05-15 13:40:21.353811] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.456 [2024-05-15 13:40:21.353874] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.456 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.456 [2024-05-15 13:40:21.361767] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.456 [2024-05-15 13:40:21.361812] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.456 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.456 [2024-05-15 13:40:21.369769] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.456 [2024-05-15 13:40:21.369812] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.456 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.456 [2024-05-15 13:40:21.377772] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.456 [2024-05-15 13:40:21.377819] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.456 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.456 [2024-05-15 13:40:21.389790] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.456 [2024-05-15 13:40:21.389837] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.456 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.456 [2024-05-15 13:40:21.397775] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.456 [2024-05-15 13:40:21.397819] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.456 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.456 [2024-05-15 13:40:21.405761] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.456 [2024-05-15 13:40:21.405798] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.456 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.456 [2024-05-15 13:40:21.413750] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:22:08.456 [2024-05-15 13:40:21.413789] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:08.456 2024/05/15 13:40:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:08.456 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (92877) - No such process 00:22:08.456 13:40:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 92877 00:22:08.456 13:40:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:22:08.456 13:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.456 13:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:08.456 13:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.456 13:40:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:22:08.456 13:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.456 13:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:08.456 delay0 00:22:08.456 13:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.456 13:40:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:22:08.456 13:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.456 13:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:08.456 13:40:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.456 13:40:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:22:08.713 [2024-05-15 
13:40:21.616548] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:22:16.814 Initializing NVMe Controllers 00:22:16.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:16.814 Initialization complete. Launching workers. 00:22:16.814 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 261, failed: 17880 00:22:16.814 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 18049, failed to submit 92 00:22:16.814 success 17937, unsuccess 112, failed 0 00:22:16.814 13:40:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:22:16.814 13:40:28 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:22:16.814 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:16.814 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:22:16.814 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:16.814 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:22:16.814 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:16.815 rmmod nvme_tcp 00:22:16.815 rmmod nvme_fabrics 00:22:16.815 rmmod nvme_keyring 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 92709 ']' 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 92709 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 92709 ']' 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 92709 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92709 00:22:16.815 killing process with pid 92709 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92709' 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 92709 00:22:16.815 [2024-05-15 13:40:28.751384] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 92709 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.815 13:40:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.815 13:40:29 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:16.815 00:22:16.815 real 0m25.722s 00:22:16.815 user 0m41.014s 00:22:16.815 sys 0m7.388s 00:22:16.815 13:40:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:16.815 13:40:29 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:22:16.815 ************************************ 00:22:16.815 END TEST nvmf_zcopy 00:22:16.815 ************************************ 00:22:16.815 13:40:29 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:22:16.815 13:40:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:16.815 13:40:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:16.815 13:40:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:16.815 ************************************ 00:22:16.815 START TEST nvmf_nmic 00:22:16.815 ************************************ 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:22:16.815 * Looking for test storage... 00:22:16.815 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:16.815 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:16.816 Cannot find device "nvmf_tgt_br" 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:16.816 Cannot find device "nvmf_tgt_br2" 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:16.816 Cannot find device "nvmf_tgt_br" 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:16.816 Cannot find device "nvmf_tgt_br2" 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:16.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:16.816 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:16.816 13:40:29 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:16.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:22:16.816 00:22:16.816 --- 10.0.0.2 ping statistics --- 00:22:16.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.816 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:16.816 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:16.816 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:22:16.816 00:22:16.816 --- 10.0.0.3 ping statistics --- 00:22:16.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.816 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:16.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:16.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:22:16.816 00:22:16.816 --- 10.0.0.1 ping statistics --- 00:22:16.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.816 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:22:16.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
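The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is printed while the autotest's waitforlisten helper polls the freshly started nvmf_tgt. As a rough sketch only (not the helper itself), a harness could gate its later rpc_cmd calls on the RPC socket in the same way; scripts/rpc.py and the spdk_get_version RPC are real SPDK pieces, while the function name and retry counts here are illustrative assumptions:

  wait_for_rpc_socket() {
      # Poll the target's JSON-RPC Unix socket until it answers, then let the
      # nvmf_create_transport / nvmf_create_subsystem RPCs that follow proceed.
      local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100} i
      for ((i = 0; i < retries; i++)); do
          if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" spdk_get_version &>/dev/null; then
              return 0
          fi
          sleep 0.1
      done
      echo "nvmf_tgt never answered on $sock" >&2
      return 1
  }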
00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=93212 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 93212 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 93212 ']' 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:16.816 13:40:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:22:16.816 [2024-05-15 13:40:29.609399] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:22:16.816 [2024-05-15 13:40:29.609527] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.816 [2024-05-15 13:40:29.741303] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:16.816 [2024-05-15 13:40:29.762093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.816 [2024-05-15 13:40:29.868949] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.816 [2024-05-15 13:40:29.869470] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.816 [2024-05-15 13:40:29.869799] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.816 [2024-05-15 13:40:29.870106] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.816 [2024-05-15 13:40:29.870410] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
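The two app_setup_trace notices above describe how the trace ring buffer for this run can be inspected. A minimal sketch of following that hint, assuming the same repo layout used elsewhere in this log; the /tmp copy destination is an arbitrary illustrative name, and the -f form for decoding a saved file should be checked against the local spdk_trace usage:

  SPDK=/home/vagrant/spdk_repo/spdk
  sudo "$SPDK/build/bin/spdk_trace" -s nvmf -i 0            # live snapshot, exactly as the NOTICE suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0                # or stash the shm file for offline analysis
  sudo "$SPDK/build/bin/spdk_trace" -f /tmp/nvmf_trace.0    # decode the saved copy after the test finishes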
00:22:16.816 [2024-05-15 13:40:29.870748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.816 [2024-05-15 13:40:29.870847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.816 [2024-05-15 13:40:29.871004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.816 [2024-05-15 13:40:29.871751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:22:17.750 [2024-05-15 13:40:30.719458] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:22:17.750 Malloc0 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:22:17.750 [2024-05-15 13:40:30.781336] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:17.750 [2024-05-15 13:40:30.781667] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.750 test case1: single bdev can't be used in multiple subsystems 00:22:17.750 13:40:30 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:22:17.750 [2024-05-15 13:40:30.809419] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:22:17.750 [2024-05-15 13:40:30.809468] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:22:17.750 [2024-05-15 13:40:30.809482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:22:17.750 2024/05/15 13:40:30 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:22:17.750 request: 00:22:17.750 { 00:22:17.750 "method": "nvmf_subsystem_add_ns", 00:22:17.750 "params": { 00:22:17.750 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:22:17.750 "namespace": { 00:22:17.750 "bdev_name": "Malloc0", 00:22:17.750 "no_auto_visible": false 00:22:17.750 } 00:22:17.750 } 00:22:17.750 } 00:22:17.750 Got JSON-RPC error response 00:22:17.750 GoRPCClient: error on JSON-RPC call 00:22:17.750 Adding namespace failed - expected result. 00:22:17.750 test case2: host connect to nvmf target in multiple paths 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
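For reference, the case1 failure above can be reproduced by hand with scripts/rpc.py, which is what the rpc_cmd wrapper in nmic.sh resolves to. This is only a sketch of the same negative check: the cnode1 call repeats the add that already succeeded earlier in the script, and the cnode2 call is the one expected to return the Code=-32602 "Invalid parameters" response printed above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0            # claims Malloc0 for cnode1
  if "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then   # same bdev, second subsystem
      echo "unexpected: one bdev was claimed by two subsystems" >&2
      exit 1
  fi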
00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:22:17.750 [2024-05-15 13:40:30.821581] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.750 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:18.008 13:40:30 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:22:18.266 13:40:31 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:22:18.266 13:40:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:22:18.266 13:40:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:18.266 13:40:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:22:18.266 13:40:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:22:20.165 13:40:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:20.165 13:40:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:20.165 13:40:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:22:20.165 13:40:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:22:20.165 13:40:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:20.165 13:40:33 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:22:20.165 13:40:33 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:22:20.165 [global] 00:22:20.165 thread=1 00:22:20.165 invalidate=1 00:22:20.165 rw=write 00:22:20.165 time_based=1 00:22:20.165 runtime=1 00:22:20.165 ioengine=libaio 00:22:20.165 direct=1 00:22:20.165 bs=4096 00:22:20.165 iodepth=1 00:22:20.165 norandommap=0 00:22:20.165 numjobs=1 00:22:20.165 00:22:20.165 verify_dump=1 00:22:20.165 verify_backlog=512 00:22:20.165 verify_state_save=0 00:22:20.165 do_verify=1 00:22:20.165 verify=crc32c-intel 00:22:20.165 [job0] 00:22:20.165 filename=/dev/nvme0n1 00:22:20.165 Could not set queue depth (nvme0n1) 00:22:20.423 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:20.423 fio-3.35 00:22:20.423 Starting 1 thread 00:22:21.357 00:22:21.357 job0: (groupid=0, jobs=1): err= 0: pid=93321: Wed May 15 13:40:34 2024 00:22:21.357 read: IOPS=2975, BW=11.6MiB/s (12.2MB/s)(11.6MiB/1001msec) 00:22:21.357 slat (usec): min=16, max=109, avg=23.50, stdev= 5.38 00:22:21.357 clat (usec): min=132, max=670, avg=157.41, stdev=21.40 00:22:21.357 lat (usec): min=150, max=694, 
avg=180.90, stdev=23.51 00:22:21.357 clat percentiles (usec): 00:22:21.357 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:22:21.357 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:22:21.357 | 70.00th=[ 161], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 190], 00:22:21.357 | 99.00th=[ 212], 99.50th=[ 231], 99.90th=[ 396], 99.95th=[ 457], 00:22:21.357 | 99.99th=[ 668] 00:22:21.357 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:22:21.357 slat (usec): min=24, max=130, avg=32.22, stdev= 7.55 00:22:21.357 clat (usec): min=30, max=852, avg=113.22, stdev=23.74 00:22:21.357 lat (usec): min=119, max=881, avg=145.44, stdev=26.09 00:22:21.357 clat percentiles (usec): 00:22:21.357 | 1.00th=[ 96], 5.00th=[ 98], 10.00th=[ 100], 20.00th=[ 102], 00:22:21.357 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 112], 00:22:21.357 | 70.00th=[ 117], 80.00th=[ 123], 90.00th=[ 130], 95.00th=[ 139], 00:22:21.357 | 99.00th=[ 159], 99.50th=[ 184], 99.90th=[ 371], 99.95th=[ 553], 00:22:21.357 | 99.99th=[ 857] 00:22:21.357 bw ( KiB/s): min=13560, max=13560, per=100.00%, avg=13560.00, stdev= 0.00, samples=1 00:22:21.357 iops : min= 3390, max= 3390, avg=3390.00, stdev= 0.00, samples=1 00:22:21.357 lat (usec) : 50=0.02%, 100=5.31%, 250=94.33%, 500=0.30%, 750=0.03% 00:22:21.357 lat (usec) : 1000=0.02% 00:22:21.357 cpu : usr=3.00%, sys=12.10%, ctx=6050, majf=0, minf=2 00:22:21.357 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:21.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:21.357 issued rwts: total=2978,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:21.357 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:21.357 00:22:21.357 Run status group 0 (all jobs): 00:22:21.357 READ: bw=11.6MiB/s (12.2MB/s), 11.6MiB/s-11.6MiB/s (12.2MB/s-12.2MB/s), io=11.6MiB (12.2MB), run=1001-1001msec 00:22:21.357 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:22:21.357 00:22:21.357 Disk stats (read/write): 00:22:21.357 nvme0n1: ios=2610/2956, merge=0/0, ticks=436/369, in_queue=805, util=91.28% 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:21.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:22:21.617 13:40:34 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:21.617 13:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:21.617 rmmod nvme_tcp 00:22:21.617 rmmod nvme_fabrics 00:22:21.874 rmmod nvme_keyring 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 93212 ']' 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 93212 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 93212 ']' 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 93212 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93212 00:22:21.874 killing process with pid 93212 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93212' 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 93212 00:22:21.874 [2024-05-15 13:40:34.766201] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:21.874 13:40:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 93212 00:22:22.132 13:40:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:22.132 13:40:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:22.132 13:40:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:22.132 13:40:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:22.132 13:40:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:22.132 13:40:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.132 13:40:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.132 13:40:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.132 13:40:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:22.132 00:22:22.132 real 0m5.984s 00:22:22.132 user 0m20.076s 00:22:22.132 sys 0m1.482s 00:22:22.132 13:40:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:22.132 13:40:35 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:22:22.132 ************************************ 00:22:22.132 END TEST nvmf_nmic 00:22:22.132 ************************************ 00:22:22.132 13:40:35 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:22:22.132 13:40:35 nvmf_tcp -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:22.132 13:40:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:22.132 13:40:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:22.132 ************************************ 00:22:22.132 START TEST nvmf_fio_target 00:22:22.132 ************************************ 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:22:22.132 * Looking for test storage... 00:22:22.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:22:22.132 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:22.133 Cannot find device "nvmf_tgt_br" 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:22.133 Cannot find device "nvmf_tgt_br2" 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:22.133 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:22:22.390 Cannot find device "nvmf_tgt_br" 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:22.390 Cannot find device "nvmf_tgt_br2" 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:22.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:22.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:22.390 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:22.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:22:22.391 00:22:22.391 --- 10.0.0.2 ping statistics --- 00:22:22.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.391 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:22.391 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:22.391 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:22:22.391 00:22:22.391 --- 10.0.0.3 ping statistics --- 00:22:22.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.391 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:22:22.391 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:22.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:22.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:22:22.647 00:22:22.647 --- 10.0.0.1 ping statistics --- 00:22:22.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.647 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=93503 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 93503 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 93503 ']' 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.648 13:40:35 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:22.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:22.648 13:40:35 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.648 [2024-05-15 13:40:35.579094] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:22:22.648 [2024-05-15 13:40:35.579211] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.648 [2024-05-15 13:40:35.705452] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:22.648 [2024-05-15 13:40:35.719835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:22.905 [2024-05-15 13:40:35.821345] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.905 [2024-05-15 13:40:35.821829] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.905 [2024-05-15 13:40:35.822089] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.905 [2024-05-15 13:40:35.822428] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.905 [2024-05-15 13:40:35.822789] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
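For reference, the namespace/veth plumbing and target launch traced above can be condensed into a short shell sketch. This is a minimal approximation, not the harness itself: the interface names, 10.0.0.x addresses, port 4420 and the nvmf_tgt flags are taken from the trace above, while the second target interface (nvmf_tgt_if2 / 10.0.0.3), error handling and teardown are omitted.

    # create the target namespace and two veth pairs (the *_br peers stay on the host side)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # address the initiator side and the target side (inside the namespace)
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # bring the links up
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic (port 4420) and bridge-local forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host -> target-namespace reachability check
    # launch the target inside the namespace; the harness then waits for /var/tmp/spdk.sock before issuing RPCs
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &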
00:22:22.905 [2024-05-15 13:40:35.828522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.905 [2024-05-15 13:40:35.828631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:22.905 [2024-05-15 13:40:35.828929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:22.905 [2024-05-15 13:40:35.828988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.838 13:40:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:23.838 13:40:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:22:23.838 13:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:23.838 13:40:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.838 13:40:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.838 13:40:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.838 13:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:24.095 [2024-05-15 13:40:36.952023] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.095 13:40:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:24.352 13:40:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:22:24.352 13:40:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:24.609 13:40:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:22:24.609 13:40:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:24.867 13:40:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:22:24.867 13:40:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:25.126 13:40:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:22:25.126 13:40:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:22:25.384 13:40:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:25.641 13:40:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:22:25.641 13:40:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:25.899 13:40:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:22:25.899 13:40:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:26.156 13:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:22:26.156 13:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:22:26.414 13:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:22:26.672 13:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:22:26.672 13:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:26.929 13:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:22:26.929 13:40:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:27.186 13:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.444 [2024-05-15 13:40:40.381239] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:27.444 [2024-05-15 13:40:40.381946] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.444 13:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:22:27.702 13:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:22:27.959 13:40:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:28.218 13:40:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:22:28.218 13:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:22:28.218 13:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:22:28.218 13:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:22:28.218 13:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:22:28.218 13:40:41 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:22:30.117 13:40:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:22:30.117 13:40:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:22:30.117 13:40:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:22:30.117 13:40:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:22:30.117 13:40:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:22:30.117 13:40:43 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:22:30.117 13:40:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:22:30.117 [global] 00:22:30.117 thread=1 00:22:30.117 invalidate=1 00:22:30.117 rw=write 00:22:30.117 time_based=1 00:22:30.117 runtime=1 00:22:30.117 ioengine=libaio 00:22:30.117 direct=1 00:22:30.117 bs=4096 00:22:30.117 iodepth=1 00:22:30.117 norandommap=0 00:22:30.117 numjobs=1 00:22:30.117 00:22:30.117 verify_dump=1 00:22:30.117 verify_backlog=512 
00:22:30.117 verify_state_save=0 00:22:30.117 do_verify=1 00:22:30.117 verify=crc32c-intel 00:22:30.117 [job0] 00:22:30.118 filename=/dev/nvme0n1 00:22:30.118 [job1] 00:22:30.118 filename=/dev/nvme0n2 00:22:30.118 [job2] 00:22:30.118 filename=/dev/nvme0n3 00:22:30.118 [job3] 00:22:30.118 filename=/dev/nvme0n4 00:22:30.118 Could not set queue depth (nvme0n1) 00:22:30.118 Could not set queue depth (nvme0n2) 00:22:30.118 Could not set queue depth (nvme0n3) 00:22:30.118 Could not set queue depth (nvme0n4) 00:22:30.376 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:30.376 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:30.376 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:30.376 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:30.376 fio-3.35 00:22:30.376 Starting 4 threads 00:22:31.748 00:22:31.748 job0: (groupid=0, jobs=1): err= 0: pid=93800: Wed May 15 13:40:44 2024 00:22:31.748 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:22:31.748 slat (nsec): min=9630, max=46120, avg=18705.00, stdev=5564.96 00:22:31.748 clat (usec): min=147, max=782, avg=389.62, stdev=100.28 00:22:31.748 lat (usec): min=166, max=795, avg=408.33, stdev=101.55 00:22:31.748 clat percentiles (usec): 00:22:31.748 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 306], 20.00th=[ 347], 00:22:31.748 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 388], 00:22:31.748 | 70.00th=[ 408], 80.00th=[ 502], 90.00th=[ 537], 95.00th=[ 545], 00:22:31.748 | 99.00th=[ 570], 99.50th=[ 586], 99.90th=[ 611], 99.95th=[ 783], 00:22:31.748 | 99.99th=[ 783] 00:22:31.748 write: IOPS=1535, BW=6142KiB/s (6289kB/s)(6148KiB/1001msec); 0 zone resets 00:22:31.748 slat (usec): min=12, max=126, avg=26.10, stdev= 7.29 00:22:31.748 clat (usec): min=106, max=904, avg=212.09, stdev=80.49 00:22:31.748 lat (usec): min=132, max=930, avg=238.18, stdev=79.34 00:22:31.748 clat percentiles (usec): 00:22:31.748 | 1.00th=[ 113], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 127], 00:22:31.748 | 30.00th=[ 137], 40.00th=[ 180], 50.00th=[ 210], 60.00th=[ 249], 00:22:31.748 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 314], 95.00th=[ 334], 00:22:31.748 | 99.00th=[ 375], 99.50th=[ 400], 99.90th=[ 717], 99.95th=[ 906], 00:22:31.748 | 99.99th=[ 906] 00:22:31.748 bw ( KiB/s): min= 8175, max= 8175, per=26.63%, avg=8175.00, stdev= 0.00, samples=1 00:22:31.748 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:22:31.748 lat (usec) : 250=34.56%, 500=55.32%, 750=10.06%, 1000=0.07% 00:22:31.748 cpu : usr=1.90%, sys=5.30%, ctx=3075, majf=0, minf=3 00:22:31.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:31.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.748 issued rwts: total=1536,1537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:31.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:31.748 job1: (groupid=0, jobs=1): err= 0: pid=93801: Wed May 15 13:40:44 2024 00:22:31.748 read: IOPS=2718, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1000msec) 00:22:31.748 slat (nsec): min=13908, max=65486, avg=21046.73, stdev=6425.08 00:22:31.748 clat (usec): min=135, max=2026, avg=166.55, stdev=40.31 00:22:31.748 lat (usec): min=153, max=2045, avg=187.60, stdev=41.68 
00:22:31.748 clat percentiles (usec): 00:22:31.748 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:22:31.748 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 167], 00:22:31.748 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 186], 95.00th=[ 194], 00:22:31.749 | 99.00th=[ 229], 99.50th=[ 243], 99.90th=[ 408], 99.95th=[ 545], 00:22:31.749 | 99.99th=[ 2024] 00:22:31.749 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec); 0 zone resets 00:22:31.749 slat (usec): min=20, max=189, avg=29.19, stdev= 8.02 00:22:31.749 clat (usec): min=100, max=710, avg=125.76, stdev=16.33 00:22:31.749 lat (usec): min=124, max=735, avg=154.95, stdev=19.66 00:22:31.749 clat percentiles (usec): 00:22:31.749 | 1.00th=[ 105], 5.00th=[ 111], 10.00th=[ 113], 20.00th=[ 117], 00:22:31.749 | 30.00th=[ 120], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 127], 00:22:31.749 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:22:31.749 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 249], 99.95th=[ 326], 00:22:31.749 | 99.99th=[ 709] 00:22:31.749 bw ( KiB/s): min=12263, max=12263, per=39.95%, avg=12263.00, stdev= 0.00, samples=1 00:22:31.749 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:22:31.749 lat (usec) : 250=99.79%, 500=0.16%, 750=0.03% 00:22:31.749 lat (msec) : 4=0.02% 00:22:31.749 cpu : usr=2.90%, sys=11.10%, ctx=5790, majf=0, minf=7 00:22:31.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:31.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.749 issued rwts: total=2718,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:31.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:31.749 job2: (groupid=0, jobs=1): err= 0: pid=93802: Wed May 15 13:40:44 2024 00:22:31.749 read: IOPS=1334, BW=5339KiB/s (5467kB/s)(5344KiB/1001msec) 00:22:31.749 slat (nsec): min=17075, max=72952, avg=28650.53, stdev=7526.44 00:22:31.749 clat (usec): min=154, max=1122, avg=388.16, stdev=58.57 00:22:31.749 lat (usec): min=182, max=1152, avg=416.81, stdev=60.04 00:22:31.749 clat percentiles (usec): 00:22:31.749 | 1.00th=[ 302], 5.00th=[ 334], 10.00th=[ 343], 20.00th=[ 351], 00:22:31.749 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 383], 00:22:31.749 | 70.00th=[ 400], 80.00th=[ 424], 90.00th=[ 469], 95.00th=[ 510], 00:22:31.749 | 99.00th=[ 553], 99.50th=[ 562], 99.90th=[ 594], 99.95th=[ 1123], 00:22:31.749 | 99.99th=[ 1123] 00:22:31.749 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:22:31.749 slat (usec): min=24, max=105, avg=37.61, stdev= 8.02 00:22:31.749 clat (usec): min=124, max=603, avg=245.00, stdev=37.10 00:22:31.749 lat (usec): min=157, max=635, avg=282.61, stdev=38.34 00:22:31.749 clat percentiles (usec): 00:22:31.749 | 1.00th=[ 178], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 217], 00:22:31.749 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 243], 60.00th=[ 253], 00:22:31.749 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 306], 00:22:31.749 | 99.00th=[ 363], 99.50th=[ 371], 99.90th=[ 529], 99.95th=[ 603], 00:22:31.749 | 99.99th=[ 603] 00:22:31.749 bw ( KiB/s): min= 8152, max= 8152, per=26.56%, avg=8152.00, stdev= 0.00, samples=1 00:22:31.749 iops : min= 2038, max= 2038, avg=2038.00, stdev= 0.00, samples=1 00:22:31.749 lat (usec) : 250=30.92%, 500=66.30%, 750=2.75% 00:22:31.749 lat (msec) : 2=0.03% 00:22:31.749 cpu : usr=1.80%, sys=7.30%, ctx=2872, majf=0, minf=11 
00:22:31.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:31.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.749 issued rwts: total=1336,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:31.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:31.749 job3: (groupid=0, jobs=1): err= 0: pid=93803: Wed May 15 13:40:44 2024 00:22:31.749 read: IOPS=1395, BW=5582KiB/s (5716kB/s)(5588KiB/1001msec) 00:22:31.749 slat (usec): min=9, max=140, avg=18.39, stdev= 5.67 00:22:31.749 clat (usec): min=196, max=890, avg=381.52, stdev=58.46 00:22:31.749 lat (usec): min=209, max=920, avg=399.92, stdev=58.24 00:22:31.749 clat percentiles (usec): 00:22:31.749 | 1.00th=[ 241], 5.00th=[ 306], 10.00th=[ 326], 20.00th=[ 347], 00:22:31.749 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 383], 00:22:31.749 | 70.00th=[ 396], 80.00th=[ 420], 90.00th=[ 461], 95.00th=[ 482], 00:22:31.749 | 99.00th=[ 510], 99.50th=[ 529], 99.90th=[ 881], 99.95th=[ 889], 00:22:31.749 | 99.99th=[ 889] 00:22:31.749 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:22:31.749 slat (usec): min=18, max=130, avg=28.35, stdev= 7.98 00:22:31.749 clat (usec): min=120, max=427, avg=254.51, stdev=41.01 00:22:31.749 lat (usec): min=143, max=450, avg=282.86, stdev=40.36 00:22:31.749 clat percentiles (usec): 00:22:31.749 | 1.00th=[ 131], 5.00th=[ 202], 10.00th=[ 219], 20.00th=[ 229], 00:22:31.749 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 260], 00:22:31.749 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 322], 00:22:31.749 | 99.00th=[ 388], 99.50th=[ 396], 99.90th=[ 404], 99.95th=[ 429], 00:22:31.749 | 99.99th=[ 429] 00:22:31.749 bw ( KiB/s): min= 8175, max= 8175, per=26.63%, avg=8175.00, stdev= 0.00, samples=1 00:22:31.749 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:22:31.749 lat (usec) : 250=25.37%, 500=73.71%, 750=0.78%, 1000=0.14% 00:22:31.749 cpu : usr=1.70%, sys=5.40%, ctx=2938, majf=0, minf=14 00:22:31.749 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:31.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:31.749 issued rwts: total=1397,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:31.749 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:31.749 00:22:31.749 Run status group 0 (all jobs): 00:22:31.749 READ: bw=27.3MiB/s (28.6MB/s), 5339KiB/s-10.6MiB/s (5467kB/s-11.1MB/s), io=27.3MiB (28.6MB), run=1000-1001msec 00:22:31.749 WRITE: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=30.0MiB (31.5MB), run=1000-1001msec 00:22:31.749 00:22:31.749 Disk stats (read/write): 00:22:31.749 nvme0n1: ios=1220/1536, merge=0/0, ticks=475/319, in_queue=794, util=87.15% 00:22:31.749 nvme0n2: ios=2440/2560, merge=0/0, ticks=442/345, in_queue=787, util=88.14% 00:22:31.749 nvme0n3: ios=1024/1460, merge=0/0, ticks=412/379, in_queue=791, util=89.12% 00:22:31.749 nvme0n4: ios=1031/1536, merge=0/0, ticks=387/385, in_queue=772, util=89.59% 00:22:31.749 13:40:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:22:31.749 [global] 00:22:31.749 thread=1 00:22:31.749 invalidate=1 00:22:31.749 rw=randwrite 00:22:31.749 time_based=1 00:22:31.749 runtime=1 
00:22:31.749 ioengine=libaio 00:22:31.749 direct=1 00:22:31.749 bs=4096 00:22:31.749 iodepth=1 00:22:31.749 norandommap=0 00:22:31.749 numjobs=1 00:22:31.749 00:22:31.749 verify_dump=1 00:22:31.749 verify_backlog=512 00:22:31.749 verify_state_save=0 00:22:31.749 do_verify=1 00:22:31.749 verify=crc32c-intel 00:22:31.749 [job0] 00:22:31.749 filename=/dev/nvme0n1 00:22:31.749 [job1] 00:22:31.749 filename=/dev/nvme0n2 00:22:31.749 [job2] 00:22:31.749 filename=/dev/nvme0n3 00:22:31.749 [job3] 00:22:31.749 filename=/dev/nvme0n4 00:22:31.749 Could not set queue depth (nvme0n1) 00:22:31.749 Could not set queue depth (nvme0n2) 00:22:31.749 Could not set queue depth (nvme0n3) 00:22:31.749 Could not set queue depth (nvme0n4) 00:22:31.749 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:31.749 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:31.749 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:31.749 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:31.749 fio-3.35 00:22:31.749 Starting 4 threads 00:22:33.124 00:22:33.124 job0: (groupid=0, jobs=1): err= 0: pid=93857: Wed May 15 13:40:45 2024 00:22:33.124 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:22:33.124 slat (nsec): min=14183, max=68384, avg=19764.37, stdev=4537.25 00:22:33.124 clat (usec): min=152, max=5635, avg=238.79, stdev=155.42 00:22:33.124 lat (usec): min=167, max=5686, avg=258.56, stdev=156.59 00:22:33.124 clat percentiles (usec): 00:22:33.124 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 194], 00:22:33.124 | 30.00th=[ 206], 40.00th=[ 219], 50.00th=[ 233], 60.00th=[ 247], 00:22:33.124 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 293], 00:22:33.124 | 99.00th=[ 375], 99.50th=[ 627], 99.90th=[ 1221], 99.95th=[ 3982], 00:22:33.124 | 99.99th=[ 5604] 00:22:33.124 write: IOPS=2083, BW=8336KiB/s (8536kB/s)(8344KiB/1001msec); 0 zone resets 00:22:33.124 slat (usec): min=21, max=116, avg=29.54, stdev= 6.55 00:22:33.124 clat (usec): min=113, max=306, avg=191.24, stdev=31.75 00:22:33.124 lat (usec): min=138, max=404, avg=220.78, stdev=32.85 00:22:33.124 clat percentiles (usec): 00:22:33.124 | 1.00th=[ 129], 5.00th=[ 143], 10.00th=[ 151], 20.00th=[ 161], 00:22:33.124 | 30.00th=[ 172], 40.00th=[ 182], 50.00th=[ 192], 60.00th=[ 200], 00:22:33.124 | 70.00th=[ 208], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 247], 00:22:33.124 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 289], 99.95th=[ 293], 00:22:33.124 | 99.99th=[ 306] 00:22:33.124 bw ( KiB/s): min= 8584, max= 8584, per=30.52%, avg=8584.00, stdev= 0.00, samples=1 00:22:33.124 iops : min= 2146, max= 2146, avg=2146.00, stdev= 0.00, samples=1 00:22:33.124 lat (usec) : 250=79.83%, 500=19.84%, 750=0.22%, 1000=0.02% 00:22:33.124 lat (msec) : 2=0.05%, 4=0.02%, 10=0.02% 00:22:33.124 cpu : usr=2.30%, sys=7.40%, ctx=4134, majf=0, minf=15 00:22:33.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:33.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.124 issued rwts: total=2048,2086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:33.124 job1: (groupid=0, jobs=1): err= 0: pid=93858: Wed May 15 13:40:45 
2024 00:22:33.124 read: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec) 00:22:33.124 slat (usec): min=13, max=200, avg=17.58, stdev= 5.22 00:22:33.124 clat (usec): min=53, max=556, avg=236.80, stdev=36.48 00:22:33.124 lat (usec): min=163, max=571, avg=254.37, stdev=36.56 00:22:33.124 clat percentiles (usec): 00:22:33.124 | 1.00th=[ 163], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 202], 00:22:33.124 | 30.00th=[ 217], 40.00th=[ 229], 50.00th=[ 241], 60.00th=[ 251], 00:22:33.124 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 289], 00:22:33.124 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 379], 99.95th=[ 445], 00:22:33.124 | 99.99th=[ 553] 00:22:33.124 write: IOPS=2133, BW=8535KiB/s (8740kB/s)(8552KiB/1002msec); 0 zone resets 00:22:33.124 slat (usec): min=20, max=142, avg=26.49, stdev= 7.42 00:22:33.124 clat (usec): min=103, max=412, avg=193.37, stdev=32.65 00:22:33.124 lat (usec): min=123, max=465, avg=219.86, stdev=34.44 00:22:33.124 clat percentiles (usec): 00:22:33.124 | 1.00th=[ 129], 5.00th=[ 143], 10.00th=[ 153], 20.00th=[ 163], 00:22:33.124 | 30.00th=[ 174], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 202], 00:22:33.124 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 237], 95.00th=[ 247], 00:22:33.124 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 334], 99.95th=[ 367], 00:22:33.124 | 99.99th=[ 412] 00:22:33.124 bw ( KiB/s): min= 8192, max= 8912, per=30.41%, avg=8552.00, stdev=509.12, samples=2 00:22:33.124 iops : min= 2048, max= 2228, avg=2138.00, stdev=127.28, samples=2 00:22:33.124 lat (usec) : 100=0.02%, 250=78.14%, 500=21.81%, 750=0.02% 00:22:33.124 cpu : usr=1.80%, sys=6.89%, ctx=4188, majf=0, minf=9 00:22:33.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:33.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.124 issued rwts: total=2048,2138,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:33.124 job2: (groupid=0, jobs=1): err= 0: pid=93859: Wed May 15 13:40:45 2024 00:22:33.124 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:22:33.124 slat (nsec): min=14801, max=76365, avg=24622.82, stdev=6302.25 00:22:33.124 clat (usec): min=245, max=662, avg=425.07, stdev=35.43 00:22:33.124 lat (usec): min=266, max=687, avg=449.69, stdev=35.67 00:22:33.124 clat percentiles (usec): 00:22:33.124 | 1.00th=[ 367], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 400], 00:22:33.124 | 30.00th=[ 404], 40.00th=[ 412], 50.00th=[ 420], 60.00th=[ 429], 00:22:33.124 | 70.00th=[ 437], 80.00th=[ 449], 90.00th=[ 465], 95.00th=[ 490], 00:22:33.124 | 99.00th=[ 529], 99.50th=[ 586], 99.90th=[ 652], 99.95th=[ 660], 00:22:33.124 | 99.99th=[ 660] 00:22:33.124 write: IOPS=1398, BW=5594KiB/s (5729kB/s)(5600KiB/1001msec); 0 zone resets 00:22:33.124 slat (usec): min=27, max=143, avg=46.45, stdev= 9.94 00:22:33.124 clat (usec): min=181, max=3782, avg=332.91, stdev=113.13 00:22:33.124 lat (usec): min=219, max=3832, avg=379.36, stdev=112.79 00:22:33.124 clat percentiles (usec): 00:22:33.124 | 1.00th=[ 217], 5.00th=[ 249], 10.00th=[ 269], 20.00th=[ 281], 00:22:33.124 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 343], 00:22:33.124 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 404], 95.00th=[ 420], 00:22:33.124 | 99.00th=[ 474], 99.50th=[ 562], 99.90th=[ 1532], 99.95th=[ 3785], 00:22:33.124 | 99.99th=[ 3785] 00:22:33.124 bw ( KiB/s): min= 5728, max= 5728, per=20.37%, avg=5728.00, 
stdev= 0.00, samples=1 00:22:33.124 iops : min= 1432, max= 1432, avg=1432.00, stdev= 0.00, samples=1 00:22:33.124 lat (usec) : 250=2.97%, 500=95.34%, 750=1.61% 00:22:33.124 lat (msec) : 2=0.04%, 4=0.04% 00:22:33.124 cpu : usr=2.20%, sys=6.40%, ctx=2440, majf=0, minf=10 00:22:33.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:33.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.124 issued rwts: total=1024,1400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:33.124 job3: (groupid=0, jobs=1): err= 0: pid=93860: Wed May 15 13:40:45 2024 00:22:33.124 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:22:33.124 slat (nsec): min=19660, max=94680, avg=36421.85, stdev=9192.47 00:22:33.124 clat (usec): min=231, max=637, avg=408.24, stdev=35.86 00:22:33.124 lat (usec): min=252, max=684, avg=444.66, stdev=35.33 00:22:33.124 clat percentiles (usec): 00:22:33.124 | 1.00th=[ 330], 5.00th=[ 363], 10.00th=[ 371], 20.00th=[ 383], 00:22:33.124 | 30.00th=[ 392], 40.00th=[ 400], 50.00th=[ 408], 60.00th=[ 412], 00:22:33.124 | 70.00th=[ 424], 80.00th=[ 433], 90.00th=[ 453], 95.00th=[ 469], 00:22:33.124 | 99.00th=[ 502], 99.50th=[ 515], 99.90th=[ 627], 99.95th=[ 635], 00:22:33.124 | 99.99th=[ 635] 00:22:33.124 write: IOPS=1419, BW=5678KiB/s (5815kB/s)(5684KiB/1001msec); 0 zone resets 00:22:33.124 slat (usec): min=27, max=154, avg=47.09, stdev= 9.91 00:22:33.124 clat (usec): min=174, max=1095, avg=328.97, stdev=64.67 00:22:33.124 lat (usec): min=205, max=1146, avg=376.06, stdev=63.53 00:22:33.124 clat percentiles (usec): 00:22:33.124 | 1.00th=[ 206], 5.00th=[ 245], 10.00th=[ 262], 20.00th=[ 281], 00:22:33.124 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 322], 60.00th=[ 338], 00:22:33.124 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 400], 95.00th=[ 424], 00:22:33.124 | 99.00th=[ 469], 99.50th=[ 570], 99.90th=[ 938], 99.95th=[ 1090], 00:22:33.124 | 99.99th=[ 1090] 00:22:33.124 bw ( KiB/s): min= 5752, max= 5752, per=20.45%, avg=5752.00, stdev= 0.00, samples=1 00:22:33.124 iops : min= 1438, max= 1438, avg=1438.00, stdev= 0.00, samples=1 00:22:33.124 lat (usec) : 250=3.72%, 500=95.42%, 750=0.74%, 1000=0.08% 00:22:33.124 lat (msec) : 2=0.04% 00:22:33.124 cpu : usr=2.30%, sys=7.70%, ctx=2447, majf=0, minf=11 00:22:33.124 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:33.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:33.124 issued rwts: total=1024,1421,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:33.124 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:33.124 00:22:33.124 Run status group 0 (all jobs): 00:22:33.124 READ: bw=24.0MiB/s (25.1MB/s), 4092KiB/s-8184KiB/s (4190kB/s-8380kB/s), io=24.0MiB (25.2MB), run=1001-1002msec 00:22:33.124 WRITE: bw=27.5MiB/s (28.8MB/s), 5594KiB/s-8535KiB/s (5729kB/s-8740kB/s), io=27.5MiB (28.9MB), run=1001-1002msec 00:22:33.124 00:22:33.124 Disk stats (read/write): 00:22:33.124 nvme0n1: ios=1586/2013, merge=0/0, ticks=409/413, in_queue=822, util=87.98% 00:22:33.124 nvme0n2: ios=1587/2048, merge=0/0, ticks=412/419, in_queue=831, util=89.32% 00:22:33.124 nvme0n3: ios=1024/1024, merge=0/0, ticks=446/356, in_queue=802, util=88.90% 00:22:33.124 nvme0n4: ios=1024/1031, merge=0/0, ticks=420/357, in_queue=777, 
util=89.74% 00:22:33.124 13:40:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:22:33.124 [global] 00:22:33.124 thread=1 00:22:33.124 invalidate=1 00:22:33.124 rw=write 00:22:33.124 time_based=1 00:22:33.124 runtime=1 00:22:33.124 ioengine=libaio 00:22:33.124 direct=1 00:22:33.124 bs=4096 00:22:33.124 iodepth=128 00:22:33.124 norandommap=0 00:22:33.124 numjobs=1 00:22:33.124 00:22:33.124 verify_dump=1 00:22:33.124 verify_backlog=512 00:22:33.124 verify_state_save=0 00:22:33.124 do_verify=1 00:22:33.124 verify=crc32c-intel 00:22:33.124 [job0] 00:22:33.124 filename=/dev/nvme0n1 00:22:33.125 [job1] 00:22:33.125 filename=/dev/nvme0n2 00:22:33.125 [job2] 00:22:33.125 filename=/dev/nvme0n3 00:22:33.125 [job3] 00:22:33.125 filename=/dev/nvme0n4 00:22:33.125 Could not set queue depth (nvme0n1) 00:22:33.125 Could not set queue depth (nvme0n2) 00:22:33.125 Could not set queue depth (nvme0n3) 00:22:33.125 Could not set queue depth (nvme0n4) 00:22:33.125 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:33.125 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:33.125 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:33.125 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:33.125 fio-3.35 00:22:33.125 Starting 4 threads 00:22:34.501 00:22:34.501 job0: (groupid=0, jobs=1): err= 0: pid=93925: Wed May 15 13:40:47 2024 00:22:34.501 read: IOPS=1529, BW=6117KiB/s (6264kB/s)(6148KiB/1005msec) 00:22:34.501 slat (usec): min=4, max=10039, avg=282.06, stdev=1037.93 00:22:34.501 clat (usec): min=4136, max=42964, avg=35350.91, stdev=2300.36 00:22:34.501 lat (usec): min=5001, max=42988, avg=35632.96, stdev=2115.74 00:22:34.501 clat percentiles (usec): 00:22:34.501 | 1.00th=[29754], 5.00th=[31065], 10.00th=[31851], 20.00th=[34866], 00:22:34.501 | 30.00th=[35390], 40.00th=[35390], 50.00th=[35914], 60.00th=[35914], 00:22:34.501 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36963], 95.00th=[38536], 00:22:34.501 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:22:34.501 | 99.99th=[42730] 00:22:34.501 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:22:34.501 slat (usec): min=10, max=11592, avg=268.65, stdev=1057.71 00:22:34.501 clat (usec): min=5006, max=42705, avg=34990.74, stdev=4463.35 00:22:34.501 lat (usec): min=5044, max=42726, avg=35259.39, stdev=4342.39 00:22:34.501 clat percentiles (usec): 00:22:34.501 | 1.00th=[11994], 5.00th=[28181], 10.00th=[32900], 20.00th=[33817], 00:22:34.501 | 30.00th=[34341], 40.00th=[35390], 50.00th=[35914], 60.00th=[35914], 00:22:34.501 | 70.00th=[36439], 80.00th=[37487], 90.00th=[38536], 95.00th=[39584], 00:22:34.501 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:22:34.501 | 99.99th=[42730] 00:22:34.501 bw ( KiB/s): min= 7184, max= 8192, per=17.18%, avg=7688.00, stdev=712.76, samples=2 00:22:34.501 iops : min= 1796, max= 2048, avg=1922.00, stdev=178.19, samples=2 00:22:34.501 lat (msec) : 10=0.28%, 20=1.06%, 50=98.66% 00:22:34.501 cpu : usr=1.79%, sys=6.47%, ctx=552, majf=0, minf=21 00:22:34.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:22:34.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.501 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:34.501 issued rwts: total=1537,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:34.501 job1: (groupid=0, jobs=1): err= 0: pid=93926: Wed May 15 13:40:47 2024 00:22:34.501 read: IOPS=1541, BW=6165KiB/s (6313kB/s)(6208KiB/1007msec) 00:22:34.501 slat (usec): min=4, max=7448, avg=277.79, stdev=1004.48 00:22:34.501 clat (usec): min=6025, max=43111, avg=34980.18, stdev=3499.49 00:22:34.501 lat (usec): min=7115, max=43129, avg=35257.96, stdev=3387.57 00:22:34.501 clat percentiles (usec): 00:22:34.501 | 1.00th=[ 8455], 5.00th=[30802], 10.00th=[31589], 20.00th=[34341], 00:22:34.501 | 30.00th=[34866], 40.00th=[35390], 50.00th=[35914], 60.00th=[35914], 00:22:34.501 | 70.00th=[35914], 80.00th=[36439], 90.00th=[36963], 95.00th=[37487], 00:22:34.501 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[43254], 00:22:34.501 | 99.99th=[43254] 00:22:34.501 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:22:34.501 slat (usec): min=13, max=9251, avg=270.40, stdev=1065.28 00:22:34.501 clat (usec): min=9707, max=42736, avg=35062.79, stdev=3850.72 00:22:34.501 lat (usec): min=9727, max=43185, avg=35333.19, stdev=3722.61 00:22:34.501 clat percentiles (usec): 00:22:34.501 | 1.00th=[16319], 5.00th=[28443], 10.00th=[32375], 20.00th=[33817], 00:22:34.501 | 30.00th=[34866], 40.00th=[35390], 50.00th=[35914], 60.00th=[35914], 00:22:34.501 | 70.00th=[36439], 80.00th=[36963], 90.00th=[38011], 95.00th=[38536], 00:22:34.501 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:22:34.501 | 99.99th=[42730] 00:22:34.501 bw ( KiB/s): min= 7304, max= 8192, per=17.32%, avg=7748.00, stdev=627.91, samples=2 00:22:34.501 iops : min= 1826, max= 2048, avg=1937.00, stdev=156.98, samples=2 00:22:34.501 lat (msec) : 10=0.67%, 20=0.92%, 50=98.42% 00:22:34.501 cpu : usr=1.49%, sys=6.26%, ctx=598, majf=0, minf=11 00:22:34.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:22:34.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:34.501 issued rwts: total=1552,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:34.501 job2: (groupid=0, jobs=1): err= 0: pid=93927: Wed May 15 13:40:47 2024 00:22:34.501 read: IOPS=3378, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1004msec) 00:22:34.501 slat (usec): min=8, max=4841, avg=139.98, stdev=683.58 00:22:34.501 clat (usec): min=512, max=21551, avg=18233.71, stdev=2122.67 00:22:34.501 lat (usec): min=3931, max=25221, avg=18373.69, stdev=2025.61 00:22:34.501 clat percentiles (usec): 00:22:34.501 | 1.00th=[ 8717], 5.00th=[14877], 10.00th=[16450], 20.00th=[17695], 00:22:34.501 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18744], 60.00th=[19006], 00:22:34.501 | 70.00th=[19006], 80.00th=[19268], 90.00th=[19792], 95.00th=[20055], 00:22:34.501 | 99.00th=[20841], 99.50th=[21365], 99.90th=[21627], 99.95th=[21627], 00:22:34.501 | 99.99th=[21627] 00:22:34.501 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:22:34.501 slat (usec): min=10, max=5325, avg=138.81, stdev=606.01 00:22:34.501 clat (usec): min=13188, max=22761, avg=18045.43, stdev=2028.69 00:22:34.501 lat (usec): min=13215, max=22797, avg=18184.24, stdev=2001.10 00:22:34.501 clat percentiles (usec): 00:22:34.501 | 1.00th=[13698], 
5.00th=[14484], 10.00th=[15270], 20.00th=[15926], 00:22:34.501 | 30.00th=[16909], 40.00th=[17695], 50.00th=[18482], 60.00th=[19006], 00:22:34.501 | 70.00th=[19530], 80.00th=[19792], 90.00th=[20317], 95.00th=[20579], 00:22:34.501 | 99.00th=[22152], 99.50th=[22676], 99.90th=[22676], 99.95th=[22676], 00:22:34.501 | 99.99th=[22676] 00:22:34.501 bw ( KiB/s): min=13928, max=14744, per=32.04%, avg=14336.00, stdev=577.00, samples=2 00:22:34.501 iops : min= 3482, max= 3686, avg=3584.00, stdev=144.25, samples=2 00:22:34.501 lat (usec) : 750=0.01% 00:22:34.501 lat (msec) : 4=0.06%, 10=0.86%, 20=86.73%, 50=12.34% 00:22:34.501 cpu : usr=2.79%, sys=11.17%, ctx=335, majf=0, minf=11 00:22:34.501 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:34.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.501 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:34.501 issued rwts: total=3392,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.501 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:34.501 job3: (groupid=0, jobs=1): err= 0: pid=93928: Wed May 15 13:40:47 2024 00:22:34.501 read: IOPS=3314, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1003msec) 00:22:34.501 slat (usec): min=4, max=6092, avg=145.17, stdev=661.65 00:22:34.501 clat (usec): min=842, max=24350, avg=18285.80, stdev=2357.41 00:22:34.501 lat (usec): min=3215, max=24393, avg=18430.97, stdev=2404.43 00:22:34.501 clat percentiles (usec): 00:22:34.501 | 1.00th=[ 6325], 5.00th=[15139], 10.00th=[16450], 20.00th=[17695], 00:22:34.501 | 30.00th=[17957], 40.00th=[18220], 50.00th=[18482], 60.00th=[18744], 00:22:34.501 | 70.00th=[19006], 80.00th=[19268], 90.00th=[20841], 95.00th=[21365], 00:22:34.501 | 99.00th=[22938], 99.50th=[23462], 99.90th=[24249], 99.95th=[24249], 00:22:34.501 | 99.99th=[24249] 00:22:34.501 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:22:34.501 slat (usec): min=12, max=6400, avg=137.97, stdev=645.13 00:22:34.502 clat (usec): min=12770, max=24706, avg=18347.84, stdev=1562.25 00:22:34.502 lat (usec): min=12800, max=24765, avg=18485.81, stdev=1630.65 00:22:34.502 clat percentiles (usec): 00:22:34.502 | 1.00th=[13566], 5.00th=[15795], 10.00th=[16712], 20.00th=[17433], 00:22:34.502 | 30.00th=[17695], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:22:34.502 | 70.00th=[19006], 80.00th=[19268], 90.00th=[20055], 95.00th=[20841], 00:22:34.502 | 99.00th=[23200], 99.50th=[23200], 99.90th=[23725], 99.95th=[24249], 00:22:34.502 | 99.99th=[24773] 00:22:34.502 bw ( KiB/s): min=14064, max=14608, per=32.04%, avg=14336.00, stdev=384.67, samples=2 00:22:34.502 iops : min= 3516, max= 3652, avg=3584.00, stdev=96.17, samples=2 00:22:34.502 lat (usec) : 1000=0.01% 00:22:34.502 lat (msec) : 4=0.26%, 10=0.61%, 20=86.93%, 50=12.19% 00:22:34.502 cpu : usr=3.59%, sys=8.38%, ctx=345, majf=0, minf=9 00:22:34.502 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:34.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:34.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:34.502 issued rwts: total=3324,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:34.502 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:34.502 00:22:34.502 Run status group 0 (all jobs): 00:22:34.502 READ: bw=38.0MiB/s (39.9MB/s), 6117KiB/s-13.2MiB/s (6264kB/s-13.8MB/s), io=38.3MiB (40.2MB), run=1003-1007msec 00:22:34.502 WRITE: bw=43.7MiB/s (45.8MB/s), 
8135KiB/s-14.0MiB/s (8330kB/s-14.6MB/s), io=44.0MiB (46.1MB), run=1003-1007msec 00:22:34.502 00:22:34.502 Disk stats (read/write): 00:22:34.502 nvme0n1: ios=1586/1559, merge=0/0, ticks=13426/12356, in_queue=25782, util=89.47% 00:22:34.502 nvme0n2: ios=1585/1560, merge=0/0, ticks=13376/12536, in_queue=25912, util=90.30% 00:22:34.502 nvme0n3: ios=2960/3072, merge=0/0, ticks=12951/11956, in_queue=24907, util=89.32% 00:22:34.502 nvme0n4: ios=2937/3072, merge=0/0, ticks=17480/16327, in_queue=33807, util=90.40% 00:22:34.502 13:40:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:22:34.502 [global] 00:22:34.502 thread=1 00:22:34.502 invalidate=1 00:22:34.502 rw=randwrite 00:22:34.502 time_based=1 00:22:34.502 runtime=1 00:22:34.502 ioengine=libaio 00:22:34.502 direct=1 00:22:34.502 bs=4096 00:22:34.502 iodepth=128 00:22:34.502 norandommap=0 00:22:34.502 numjobs=1 00:22:34.502 00:22:34.502 verify_dump=1 00:22:34.502 verify_backlog=512 00:22:34.502 verify_state_save=0 00:22:34.502 do_verify=1 00:22:34.502 verify=crc32c-intel 00:22:34.502 [job0] 00:22:34.502 filename=/dev/nvme0n1 00:22:34.502 [job1] 00:22:34.502 filename=/dev/nvme0n2 00:22:34.502 [job2] 00:22:34.502 filename=/dev/nvme0n3 00:22:34.502 [job3] 00:22:34.502 filename=/dev/nvme0n4 00:22:34.502 Could not set queue depth (nvme0n1) 00:22:34.502 Could not set queue depth (nvme0n2) 00:22:34.502 Could not set queue depth (nvme0n3) 00:22:34.502 Could not set queue depth (nvme0n4) 00:22:34.502 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:34.502 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:34.502 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:34.502 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:22:34.502 fio-3.35 00:22:34.502 Starting 4 threads 00:22:35.878 00:22:35.878 job0: (groupid=0, jobs=1): err= 0: pid=93981: Wed May 15 13:40:48 2024 00:22:35.878 read: IOPS=4301, BW=16.8MiB/s (17.6MB/s)(17.0MiB/1009msec) 00:22:35.878 slat (usec): min=4, max=18227, avg=122.63, stdev=797.99 00:22:35.878 clat (usec): min=5041, max=35243, avg=15345.18, stdev=4819.18 00:22:35.878 lat (usec): min=5055, max=35260, avg=15467.81, stdev=4861.61 00:22:35.878 clat percentiles (usec): 00:22:35.878 | 1.00th=[ 6456], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[11338], 00:22:35.878 | 30.00th=[12387], 40.00th=[13042], 50.00th=[14615], 60.00th=[15795], 00:22:35.878 | 70.00th=[16581], 80.00th=[18744], 90.00th=[21365], 95.00th=[24511], 00:22:35.878 | 99.00th=[31065], 99.50th=[33424], 99.90th=[35390], 99.95th=[35390], 00:22:35.878 | 99.99th=[35390] 00:22:35.878 write: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec); 0 zone resets 00:22:35.878 slat (usec): min=5, max=15774, avg=93.52, stdev=442.23 00:22:35.878 clat (usec): min=4397, max=35186, avg=13289.99, stdev=3582.57 00:22:35.878 lat (usec): min=4418, max=35197, avg=13383.50, stdev=3620.08 00:22:35.878 clat percentiles (usec): 00:22:35.878 | 1.00th=[ 4883], 5.00th=[ 6390], 10.00th=[ 8225], 20.00th=[11207], 00:22:35.878 | 30.00th=[11863], 40.00th=[12125], 50.00th=[13173], 60.00th=[13435], 00:22:35.878 | 70.00th=[16057], 80.00th=[16909], 90.00th=[17957], 95.00th=[18482], 00:22:35.878 | 99.00th=[19530], 99.50th=[19792], 99.90th=[31065], 99.95th=[32375], 
00:22:35.878 | 99.99th=[35390] 00:22:35.878 bw ( KiB/s): min=16384, max=20480, per=29.91%, avg=18432.00, stdev=2896.31, samples=2 00:22:35.878 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:22:35.878 lat (msec) : 10=11.48%, 20=80.93%, 50=7.59% 00:22:35.878 cpu : usr=4.46%, sys=11.31%, ctx=686, majf=0, minf=6 00:22:35.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:35.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:35.878 issued rwts: total=4340,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:35.878 job1: (groupid=0, jobs=1): err= 0: pid=93982: Wed May 15 13:40:48 2024 00:22:35.878 read: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec) 00:22:35.878 slat (usec): min=4, max=20746, avg=141.72, stdev=1041.34 00:22:35.878 clat (usec): min=6340, max=43274, avg=17994.21, stdev=6697.74 00:22:35.878 lat (usec): min=6353, max=43302, avg=18135.92, stdev=6767.02 00:22:35.878 clat percentiles (usec): 00:22:35.878 | 1.00th=[ 8356], 5.00th=[10028], 10.00th=[10945], 20.00th=[11863], 00:22:35.878 | 30.00th=[12256], 40.00th=[14877], 50.00th=[16909], 60.00th=[20841], 00:22:35.878 | 70.00th=[21627], 80.00th=[22676], 90.00th=[25560], 95.00th=[31065], 00:22:35.878 | 99.00th=[39584], 99.50th=[40633], 99.90th=[42206], 99.95th=[43254], 00:22:35.878 | 99.99th=[43254] 00:22:35.878 write: IOPS=4049, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:22:35.878 slat (usec): min=4, max=18033, avg=112.49, stdev=647.80 00:22:35.878 clat (usec): min=3532, max=41971, avg=15524.46, stdev=5451.91 00:22:35.878 lat (usec): min=3597, max=41982, avg=15636.96, stdev=5511.14 00:22:35.878 clat percentiles (usec): 00:22:35.878 | 1.00th=[ 5145], 5.00th=[ 6783], 10.00th=[ 8979], 20.00th=[11207], 00:22:35.878 | 30.00th=[11731], 40.00th=[12125], 50.00th=[15270], 60.00th=[16319], 00:22:35.878 | 70.00th=[20579], 80.00th=[21890], 90.00th=[22414], 95.00th=[22676], 00:22:35.878 | 99.00th=[24249], 99.50th=[24773], 99.90th=[40109], 99.95th=[41681], 00:22:35.878 | 99.99th=[42206] 00:22:35.878 bw ( KiB/s): min=12312, max=19440, per=25.76%, avg=15876.00, stdev=5040.26, samples=2 00:22:35.878 iops : min= 3078, max= 4860, avg=3969.00, stdev=1260.06, samples=2 00:22:35.878 lat (msec) : 4=0.20%, 10=9.68%, 20=53.35%, 50=36.78% 00:22:35.878 cpu : usr=3.66%, sys=9.90%, ctx=523, majf=0, minf=7 00:22:35.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:35.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:35.878 issued rwts: total=3584,4094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:35.878 job2: (groupid=0, jobs=1): err= 0: pid=93983: Wed May 15 13:40:48 2024 00:22:35.878 read: IOPS=3071, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1010msec) 00:22:35.878 slat (usec): min=5, max=16540, avg=161.85, stdev=1004.08 00:22:35.878 clat (usec): min=4216, max=38801, avg=20024.85, stdev=6011.89 00:22:35.878 lat (usec): min=6743, max=38851, avg=20186.71, stdev=6066.05 00:22:35.878 clat percentiles (usec): 00:22:35.878 | 1.00th=[ 9110], 5.00th=[12125], 10.00th=[13042], 20.00th=[14484], 00:22:35.878 | 30.00th=[16319], 40.00th=[17695], 50.00th=[19006], 60.00th=[20579], 00:22:35.878 | 70.00th=[22152], 
80.00th=[26084], 90.00th=[28705], 95.00th=[30802], 00:22:35.878 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:22:35.878 | 99.99th=[39060] 00:22:35.878 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:22:35.878 slat (usec): min=5, max=16011, avg=131.92, stdev=685.55 00:22:35.878 clat (usec): min=5927, max=36907, avg=18367.59, stdev=5148.67 00:22:35.878 lat (usec): min=5954, max=37778, avg=18499.51, stdev=5211.56 00:22:35.878 clat percentiles (usec): 00:22:35.878 | 1.00th=[ 6783], 5.00th=[ 8848], 10.00th=[10945], 20.00th=[14484], 00:22:35.878 | 30.00th=[15270], 40.00th=[17957], 50.00th=[19006], 60.00th=[19792], 00:22:35.878 | 70.00th=[20579], 80.00th=[21890], 90.00th=[24511], 95.00th=[26870], 00:22:35.878 | 99.00th=[30802], 99.50th=[32375], 99.90th=[36439], 99.95th=[36963], 00:22:35.878 | 99.99th=[36963] 00:22:35.878 bw ( KiB/s): min=13568, max=14348, per=22.65%, avg=13958.00, stdev=551.54, samples=2 00:22:35.878 iops : min= 3392, max= 3587, avg=3489.50, stdev=137.89, samples=2 00:22:35.878 lat (msec) : 10=5.00%, 20=54.29%, 50=40.71% 00:22:35.878 cpu : usr=5.15%, sys=7.63%, ctx=551, majf=0, minf=7 00:22:35.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:35.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:35.878 issued rwts: total=3102,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:35.878 job3: (groupid=0, jobs=1): err= 0: pid=93984: Wed May 15 13:40:48 2024 00:22:35.878 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:22:35.878 slat (usec): min=3, max=16318, avg=163.78, stdev=1030.81 00:22:35.878 clat (usec): min=6369, max=43384, avg=20884.29, stdev=6115.63 00:22:35.878 lat (usec): min=6382, max=43396, avg=21048.07, stdev=6174.00 00:22:35.878 clat percentiles (usec): 00:22:35.878 | 1.00th=[ 8586], 5.00th=[12780], 10.00th=[13173], 20.00th=[15008], 00:22:35.878 | 30.00th=[16909], 40.00th=[19530], 50.00th=[20055], 60.00th=[21365], 00:22:35.878 | 70.00th=[23462], 80.00th=[26084], 90.00th=[28967], 95.00th=[32637], 00:22:35.878 | 99.00th=[36439], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:22:35.878 | 99.99th=[43254] 00:22:35.878 write: IOPS=3261, BW=12.7MiB/s (13.4MB/s)(12.9MiB/1009msec); 0 zone resets 00:22:35.878 slat (usec): min=5, max=17950, avg=144.13, stdev=778.35 00:22:35.878 clat (usec): min=3812, max=39674, avg=19356.58, stdev=5148.42 00:22:35.879 lat (usec): min=5098, max=39737, avg=19500.72, stdev=5221.51 00:22:35.879 clat percentiles (usec): 00:22:35.879 | 1.00th=[ 6718], 5.00th=[ 9503], 10.00th=[12125], 20.00th=[15139], 00:22:35.879 | 30.00th=[17171], 40.00th=[19268], 50.00th=[20317], 60.00th=[20841], 00:22:35.879 | 70.00th=[21627], 80.00th=[22414], 90.00th=[25822], 95.00th=[26870], 00:22:35.879 | 99.00th=[33424], 99.50th=[35390], 99.90th=[36439], 99.95th=[39584], 00:22:35.879 | 99.99th=[39584] 00:22:35.879 bw ( KiB/s): min=12488, max=12840, per=20.55%, avg=12664.00, stdev=248.90, samples=2 00:22:35.879 iops : min= 3122, max= 3210, avg=3166.00, stdev=62.23, samples=2 00:22:35.879 lat (msec) : 4=0.02%, 10=3.57%, 20=43.25%, 50=53.17% 00:22:35.879 cpu : usr=3.37%, sys=8.33%, ctx=539, majf=0, minf=5 00:22:35.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:35.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:35.879 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:35.879 issued rwts: total=3072,3291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:35.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:35.879 00:22:35.879 Run status group 0 (all jobs): 00:22:35.879 READ: bw=54.5MiB/s (57.1MB/s), 11.9MiB/s-16.8MiB/s (12.5MB/s-17.6MB/s), io=55.1MiB (57.7MB), run=1009-1011msec 00:22:35.879 WRITE: bw=60.2MiB/s (63.1MB/s), 12.7MiB/s-17.8MiB/s (13.4MB/s-18.7MB/s), io=60.8MiB (63.8MB), run=1009-1011msec 00:22:35.879 00:22:35.879 Disk stats (read/write): 00:22:35.879 nvme0n1: ios=3634/3671, merge=0/0, ticks=53854/49438, in_queue=103292, util=89.18% 00:22:35.879 nvme0n2: ios=2917/3072, merge=0/0, ticks=53259/50523, in_queue=103782, util=89.94% 00:22:35.879 nvme0n3: ios=2812/3072, merge=0/0, ticks=48689/48615, in_queue=97304, util=88.98% 00:22:35.879 nvme0n4: ios=2560/3070, merge=0/0, ticks=46597/50606, in_queue=97203, util=89.32% 00:22:35.879 13:40:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:22:35.879 13:40:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=93997 00:22:35.879 13:40:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:22:35.879 13:40:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:22:35.879 [global] 00:22:35.879 thread=1 00:22:35.879 invalidate=1 00:22:35.879 rw=read 00:22:35.879 time_based=1 00:22:35.879 runtime=10 00:22:35.879 ioengine=libaio 00:22:35.879 direct=1 00:22:35.879 bs=4096 00:22:35.879 iodepth=1 00:22:35.879 norandommap=1 00:22:35.879 numjobs=1 00:22:35.879 00:22:35.879 [job0] 00:22:35.879 filename=/dev/nvme0n1 00:22:35.879 [job1] 00:22:35.879 filename=/dev/nvme0n2 00:22:35.879 [job2] 00:22:35.879 filename=/dev/nvme0n3 00:22:35.879 [job3] 00:22:35.879 filename=/dev/nvme0n4 00:22:35.879 Could not set queue depth (nvme0n1) 00:22:35.879 Could not set queue depth (nvme0n2) 00:22:35.879 Could not set queue depth (nvme0n3) 00:22:35.879 Could not set queue depth (nvme0n4) 00:22:35.879 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:35.879 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:35.879 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:35.879 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:35.879 fio-3.35 00:22:35.879 Starting 4 threads 00:22:39.169 13:40:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:22:39.169 fio: pid=94046, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:39.169 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=62955520, buflen=4096 00:22:39.169 13:40:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:22:39.169 fio: pid=94045, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:39.169 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=69312512, buflen=4096 00:22:39.169 13:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:39.169 13:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 
00:22:39.427 fio: pid=94038, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:39.427 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=46272512, buflen=4096 00:22:39.427 13:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:39.427 13:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:22:39.684 fio: pid=94041, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:22:39.684 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=51507200, buflen=4096 00:22:39.684 00:22:39.684 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=94038: Wed May 15 13:40:52 2024 00:22:39.684 read: IOPS=3229, BW=12.6MiB/s (13.2MB/s)(44.1MiB/3498msec) 00:22:39.684 slat (usec): min=8, max=12841, avg=22.66, stdev=206.83 00:22:39.684 clat (usec): min=146, max=3573, avg=284.82, stdev=73.85 00:22:39.684 lat (usec): min=161, max=13184, avg=307.48, stdev=220.31 00:22:39.684 clat percentiles (usec): 00:22:39.684 | 1.00th=[ 178], 5.00th=[ 251], 10.00th=[ 260], 20.00th=[ 265], 00:22:39.684 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:22:39.684 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 314], 95.00th=[ 334], 00:22:39.684 | 99.00th=[ 424], 99.50th=[ 529], 99.90th=[ 955], 99.95th=[ 1958], 00:22:39.684 | 99.99th=[ 2933] 00:22:39.684 bw ( KiB/s): min=12432, max=13352, per=21.77%, avg=13037.33, stdev=340.49, samples=6 00:22:39.684 iops : min= 3108, max= 3338, avg=3259.33, stdev=85.12, samples=6 00:22:39.684 lat (usec) : 250=4.51%, 500=94.90%, 750=0.41%, 1000=0.07% 00:22:39.684 lat (msec) : 2=0.05%, 4=0.04% 00:22:39.684 cpu : usr=1.40%, sys=5.12%, ctx=11307, majf=0, minf=1 00:22:39.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:39.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.684 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.684 issued rwts: total=11298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:39.684 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=94041: Wed May 15 13:40:52 2024 00:22:39.684 read: IOPS=3351, BW=13.1MiB/s (13.7MB/s)(49.1MiB/3752msec) 00:22:39.684 slat (usec): min=8, max=11483, avg=22.63, stdev=180.31 00:22:39.684 clat (usec): min=4, max=5169, avg=273.63, stdev=101.89 00:22:39.684 lat (usec): min=149, max=11733, avg=296.26, stdev=207.38 00:22:39.684 clat percentiles (usec): 00:22:39.684 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 182], 20.00th=[ 262], 00:22:39.684 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:22:39.684 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 326], 00:22:39.684 | 99.00th=[ 420], 99.50th=[ 510], 99.90th=[ 1188], 99.95th=[ 2540], 00:22:39.684 | 99.99th=[ 4228] 00:22:39.684 bw ( KiB/s): min=12832, max=13408, per=21.89%, avg=13105.86, stdev=238.10, samples=7 00:22:39.684 iops : min= 3208, max= 3354, avg=3276.71, stdev=59.99, samples=7 00:22:39.684 lat (usec) : 10=0.01%, 250=13.53%, 500=85.91%, 750=0.37%, 1000=0.04% 00:22:39.684 lat (msec) : 2=0.07%, 4=0.05%, 10=0.02% 00:22:39.684 cpu : usr=1.39%, sys=5.01%, ctx=12607, majf=0, minf=1 00:22:39.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:39.684 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.684 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.684 issued rwts: total=12576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:39.684 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=94045: Wed May 15 13:40:52 2024 00:22:39.684 read: IOPS=5200, BW=20.3MiB/s (21.3MB/s)(66.1MiB/3254msec) 00:22:39.684 slat (usec): min=13, max=12435, avg=17.27, stdev=117.09 00:22:39.684 clat (usec): min=122, max=7340, avg=173.30, stdev=65.25 00:22:39.684 lat (usec): min=151, max=12602, avg=190.58, stdev=134.02 00:22:39.684 clat percentiles (usec): 00:22:39.684 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:22:39.684 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 174], 00:22:39.684 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:22:39.684 | 99.00th=[ 210], 99.50th=[ 221], 99.90th=[ 375], 99.95th=[ 938], 00:22:39.684 | 99.99th=[ 1991] 00:22:39.684 bw ( KiB/s): min=20792, max=21088, per=34.95%, avg=20926.67, stdev=108.45, samples=6 00:22:39.684 iops : min= 5198, max= 5272, avg=5231.67, stdev=27.11, samples=6 00:22:39.684 lat (usec) : 250=99.80%, 500=0.12%, 750=0.01%, 1000=0.01% 00:22:39.684 lat (msec) : 2=0.04%, 10=0.01% 00:22:39.684 cpu : usr=1.66%, sys=6.76%, ctx=16926, majf=0, minf=1 00:22:39.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:39.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.684 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.684 issued rwts: total=16923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:39.684 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=94046: Wed May 15 13:40:52 2024 00:22:39.684 read: IOPS=5156, BW=20.1MiB/s (21.1MB/s)(60.0MiB/2981msec) 00:22:39.684 slat (usec): min=13, max=551, avg=16.00, stdev= 5.13 00:22:39.684 clat (usec): min=62, max=2296, avg=176.05, stdev=31.28 00:22:39.684 lat (usec): min=166, max=2312, avg=192.05, stdev=31.84 00:22:39.684 clat percentiles (usec): 00:22:39.684 | 1.00th=[ 159], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:22:39.684 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:22:39.684 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 196], 00:22:39.684 | 99.00th=[ 212], 99.50th=[ 223], 99.90th=[ 338], 99.95th=[ 449], 00:22:39.684 | 99.99th=[ 1991] 00:22:39.684 bw ( KiB/s): min=20248, max=20968, per=34.46%, avg=20635.20, stdev=294.52, samples=5 00:22:39.684 iops : min= 5062, max= 5242, avg=5158.80, stdev=73.63, samples=5 00:22:39.684 lat (usec) : 100=0.01%, 250=99.68%, 500=0.26%, 750=0.02% 00:22:39.684 lat (msec) : 2=0.02%, 4=0.01% 00:22:39.684 cpu : usr=1.88%, sys=6.91%, ctx=15373, majf=0, minf=1 00:22:39.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:39.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.684 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:39.684 issued rwts: total=15371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:39.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:39.684 00:22:39.684 Run status group 0 (all jobs): 00:22:39.684 READ: bw=58.5MiB/s (61.3MB/s), 
12.6MiB/s-20.3MiB/s (13.2MB/s-21.3MB/s), io=219MiB (230MB), run=2981-3752msec 00:22:39.684 00:22:39.684 Disk stats (read/write): 00:22:39.684 nvme0n1: ios=10862/0, merge=0/0, ticks=3158/0, in_queue=3158, util=95.37% 00:22:39.684 nvme0n2: ios=11855/0, merge=0/0, ticks=3379/0, in_queue=3379, util=95.64% 00:22:39.684 nvme0n3: ios=16185/0, merge=0/0, ticks=2882/0, in_queue=2882, util=96.16% 00:22:39.684 nvme0n4: ios=14830/0, merge=0/0, ticks=2676/0, in_queue=2676, util=96.70% 00:22:39.684 13:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:39.684 13:40:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:22:39.942 13:40:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:39.942 13:40:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:22:40.201 13:40:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:40.201 13:40:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:22:40.458 13:40:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:40.458 13:40:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:22:40.716 13:40:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:22:40.716 13:40:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:22:41.283 13:40:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:22:41.283 13:40:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 93997 00:22:41.283 13:40:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:22:41.283 13:40:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:41.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:41.283 13:40:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:41.283 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:22:41.283 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:22:41.283 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:41.283 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:41.283 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:22:41.283 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:22:41.283 13:40:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:22:41.283 nvmf hotplug test: fio failed as expected 00:22:41.283 13:40:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:22:41.283 13:40:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 
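[Editor's note, not part of the captured log] The trace above is the hotplug check: a 10-second fio read job is started in the background, the raid and malloc bdevs backing the subsystem are deleted out from under it, and fio is expected to fail with Remote I/O errors ("nvmf hotplug test: fio failed as expected"). A minimal stand-alone sketch of that sequence, using only commands and names already shown in this log, might look like:

# Sketch only -- reproduces the hotplug check by hand; paths and bdev names are taken from the log above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
FIO=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper

# Start a 10-second read workload against the connected namespaces in the background.
$FIO -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!

# Delete the backing bdevs while fio is still reading from them.
$RPC bdev_raid_delete concat0
$RPC bdev_raid_delete raid0
for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
  $RPC bdev_malloc_delete "$m"
done

# fio should exit non-zero with Remote I/O errors once its namespaces disappear.
wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'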
00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:41.542 rmmod nvme_tcp 00:22:41.542 rmmod nvme_fabrics 00:22:41.542 rmmod nvme_keyring 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 93503 ']' 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 93503 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 93503 ']' 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 93503 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93503 00:22:41.542 killing process with pid 93503 00:22:41.542 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:41.543 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:41.543 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93503' 00:22:41.543 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 93503 00:22:41.543 [2024-05-15 13:40:54.521556] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:41.543 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 93503 00:22:41.801 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:41.801 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:41.801 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:41.801 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:41.801 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:41.801 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.801 13:40:54 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.801 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.801 13:40:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:41.801 00:22:41.801 real 0m19.692s 00:22:41.801 user 1m15.990s 00:22:41.801 sys 0m8.528s 00:22:41.801 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:41.801 13:40:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.801 ************************************ 00:22:41.801 END TEST nvmf_fio_target 00:22:41.801 ************************************ 00:22:41.801 13:40:54 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:41.801 13:40:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:41.801 13:40:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:41.801 13:40:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:41.801 ************************************ 00:22:41.801 START TEST nvmf_bdevio 00:22:41.801 ************************************ 00:22:41.801 13:40:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:22:42.059 * Looking for test storage... 00:22:42.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:42.059 13:40:54 nvmf_tcp.nvmf_bdevio -- 
target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:42.060 Cannot find device "nvmf_tgt_br" 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:42.060 Cannot find device "nvmf_tgt_br2" 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:42.060 
Cannot find device "nvmf_tgt_br" 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:22:42.060 13:40:54 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:42.060 Cannot find device "nvmf_tgt_br2" 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:42.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:42.060 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:42.060 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:42.318 13:40:55 
nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:42.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:42.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:22:42.318 00:22:42.318 --- 10.0.0.2 ping statistics --- 00:22:42.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.318 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:42.318 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:42.318 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:22:42.318 00:22:42.318 --- 10.0.0.3 ping statistics --- 00:22:42.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.318 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:42.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:42.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:22:42.318 00:22:42.318 --- 10.0.0.1 ping statistics --- 00:22:42.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:42.318 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:42.318 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:42.319 13:40:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:42.319 13:40:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:42.319 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=94364 00:22:42.319 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 94364 00:22:42.319 13:40:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:22:42.319 13:40:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 94364 ']' 00:22:42.319 13:40:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.319 13:40:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:42.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
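[Editor's note, not part of the captured log] The nvmf_veth_init steps traced above build a small veth-plus-bridge topology so the initiator side (10.0.0.1) can reach the target application running inside the nvmf_tgt_ns_spdk namespace (10.0.0.2), which the three ping checks then verify. Condensed into a stand-alone sketch with the same interface names and addresses as in the log (the second target interface, nvmf_tgt_if2/10.0.0.3, is created the same way and omitted here for brevity):

# Sketch only -- a condensed replay of the nvmf_veth_init commands logged above.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the two host-side veth ends together and allow NVMe/TCP traffic in.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# Sanity check: the target-side address should answer from the host.
ping -c 1 10.0.0.2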
00:22:42.319 13:40:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.319 13:40:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:42.319 13:40:55 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:42.319 [2024-05-15 13:40:55.324274] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:22:42.319 [2024-05-15 13:40:55.324376] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.576 [2024-05-15 13:40:55.450223] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:42.576 [2024-05-15 13:40:55.469081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:42.576 [2024-05-15 13:40:55.576497] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:42.576 [2024-05-15 13:40:55.576554] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:42.576 [2024-05-15 13:40:55.576568] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:42.576 [2024-05-15 13:40:55.576579] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:42.576 [2024-05-15 13:40:55.576588] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:42.576 [2024-05-15 13:40:55.576769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:42.576 [2024-05-15 13:40:55.577403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:42.576 [2024-05-15 13:40:55.577505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:42.576 [2024-05-15 13:40:55.577513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:43.507 [2024-05-15 13:40:56.387123] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set 
+x 00:22:43.507 Malloc0 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:43.507 [2024-05-15 13:40:56.459508] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:43.507 [2024-05-15 13:40:56.460017] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:43.507 13:40:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:43.507 { 00:22:43.507 "params": { 00:22:43.507 "name": "Nvme$subsystem", 00:22:43.507 "trtype": "$TEST_TRANSPORT", 00:22:43.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:43.507 "adrfam": "ipv4", 00:22:43.507 "trsvcid": "$NVMF_PORT", 00:22:43.508 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:43.508 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:43.508 "hdgst": ${hdgst:-false}, 00:22:43.508 "ddgst": ${ddgst:-false} 00:22:43.508 }, 00:22:43.508 "method": "bdev_nvme_attach_controller" 00:22:43.508 } 00:22:43.508 EOF 00:22:43.508 )") 00:22:43.508 13:40:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:22:43.508 13:40:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
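[Editor's note, not part of the captured log] Target-side provisioning for this bdevio run is just a handful of RPCs traced above: create the TCP transport, create a 64 MiB malloc bdev, expose it as a namespace of nqn.2016-06.io.spdk:cnode1, and add a TCP listener on 10.0.0.2:4420. As stand-alone rpc.py calls (the log issues the same commands through the harness's rpc_cmd wrapper), a sketch would be:

# Sketch only -- the provisioning RPCs traced above, issued directly with rpc.py.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev with 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420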
00:22:43.508 13:40:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:22:43.508 13:40:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:43.508 "params": { 00:22:43.508 "name": "Nvme1", 00:22:43.508 "trtype": "tcp", 00:22:43.508 "traddr": "10.0.0.2", 00:22:43.508 "adrfam": "ipv4", 00:22:43.508 "trsvcid": "4420", 00:22:43.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.508 "hdgst": false, 00:22:43.508 "ddgst": false 00:22:43.508 }, 00:22:43.508 "method": "bdev_nvme_attach_controller" 00:22:43.508 }' 00:22:43.508 [2024-05-15 13:40:56.518532] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:22:43.508 [2024-05-15 13:40:56.518635] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94418 ] 00:22:43.765 [2024-05-15 13:40:56.643550] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:43.765 [2024-05-15 13:40:56.687961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:43.765 [2024-05-15 13:40:56.796878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.765 [2024-05-15 13:40:56.797011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.765 [2024-05-15 13:40:56.797251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.023 I/O targets: 00:22:44.023 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:44.023 00:22:44.023 00:22:44.023 CUnit - A unit testing framework for C - Version 2.1-3 00:22:44.023 http://cunit.sourceforge.net/ 00:22:44.023 00:22:44.023 00:22:44.023 Suite: bdevio tests on: Nvme1n1 00:22:44.023 Test: blockdev write read block ...passed 00:22:44.023 Test: blockdev write zeroes read block ...passed 00:22:44.023 Test: blockdev write zeroes read no split ...passed 00:22:44.023 Test: blockdev write zeroes read split ...passed 00:22:44.023 Test: blockdev write zeroes read split partial ...passed 00:22:44.023 Test: blockdev reset ...[2024-05-15 13:40:57.094412] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:44.023 [2024-05-15 13:40:57.094660] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142e1c0 (9): Bad file descriptor 00:22:44.023 [2024-05-15 13:40:57.106304] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:44.023 passed 00:22:44.023 Test: blockdev write read 8 blocks ...passed 00:22:44.023 Test: blockdev write read size > 128k ...passed 00:22:44.023 Test: blockdev write read invalid size ...passed 00:22:44.280 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:44.280 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:44.280 Test: blockdev write read max offset ...passed 00:22:44.280 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:44.280 Test: blockdev writev readv 8 blocks ...passed 00:22:44.280 Test: blockdev writev readv 30 x 1block ...passed 00:22:44.280 Test: blockdev writev readv block ...passed 00:22:44.280 Test: blockdev writev readv size > 128k ...passed 00:22:44.280 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:44.280 Test: blockdev comparev and writev ...[2024-05-15 13:40:57.278982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.281 [2024-05-15 13:40:57.279032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.281 [2024-05-15 13:40:57.279053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.281 [2024-05-15 13:40:57.279064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:44.281 [2024-05-15 13:40:57.279433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.281 [2024-05-15 13:40:57.279456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:44.281 [2024-05-15 13:40:57.279474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.281 [2024-05-15 13:40:57.279484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:44.281 [2024-05-15 13:40:57.279780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.281 [2024-05-15 13:40:57.279799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:44.281 [2024-05-15 13:40:57.279816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.281 [2024-05-15 13:40:57.279827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:44.281 [2024-05-15 13:40:57.280107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.281 [2024-05-15 13:40:57.280124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:44.281 [2024-05-15 13:40:57.280140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:44.281 [2024-05-15 13:40:57.280150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:44.281 passed 00:22:44.281 Test: blockdev nvme passthru rw ...passed 00:22:44.281 Test: blockdev nvme passthru vendor specific ...[2024-05-15 13:40:57.364936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.281 [2024-05-15 13:40:57.364987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.281 [2024-05-15 13:40:57.365111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.281 [2024-05-15 13:40:57.365134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:44.281 [2024-05-15 13:40:57.365248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.281 [2024-05-15 13:40:57.365269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:44.281 [2024-05-15 13:40:57.365380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:44.281 [2024-05-15 13:40:57.365402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:44.281 passed 00:22:44.539 Test: blockdev nvme admin passthru ...passed 00:22:44.539 Test: blockdev copy ...passed 00:22:44.539 00:22:44.539 Run Summary: Type Total Ran Passed Failed Inactive 00:22:44.539 suites 1 1 n/a 0 0 00:22:44.539 tests 23 23 23 0 0 00:22:44.539 asserts 152 152 152 0 n/a 00:22:44.539 00:22:44.539 Elapsed time = 0.898 seconds 00:22:44.539 13:40:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:44.539 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.539 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:44.539 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.539 13:40:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:44.539 13:40:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:22:44.539 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:44.539 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:44.797 rmmod nvme_tcp 00:22:44.797 rmmod nvme_fabrics 00:22:44.797 rmmod nvme_keyring 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 94364 ']' 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 94364 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
94364 ']' 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 94364 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94364 00:22:44.797 killing process with pid 94364 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94364' 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 94364 00:22:44.797 [2024-05-15 13:40:57.727535] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:44.797 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 94364 00:22:45.056 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:45.056 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:45.056 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:45.056 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:45.056 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:45.056 13:40:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.056 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.056 13:40:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.056 13:40:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:45.056 00:22:45.056 real 0m3.175s 00:22:45.056 user 0m11.577s 00:22:45.056 sys 0m0.788s 00:22:45.056 13:40:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:45.056 13:40:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:22:45.056 ************************************ 00:22:45.056 END TEST nvmf_bdevio 00:22:45.056 ************************************ 00:22:45.056 13:40:58 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:45.056 13:40:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:45.056 13:40:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:45.056 13:40:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:45.056 ************************************ 00:22:45.056 START TEST nvmf_auth_target 00:22:45.056 ************************************ 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:22:45.056 * Looking for test storage... 
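Before the auth-target run gets under way, the records just above show how the nvmf_bdevio fixture tears itself down: the subsystem under test is deleted over RPC, the kernel initiator modules are unloaded (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring are their output), the nvmf_tgt process with pid 94364 is killed, and the initiator-side test address is flushed. A condensed sketch of that sequence, assuming the repo path and the pid printed in this run:

  SPDK=/home/vagrant/spdk_repo/spdk
  NVMF_PID=94364                      # target pid printed earlier in this run

  # Remove the subsystem that bdevio was exercising
  "$SPDK"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # Unload the kernel initiator stack; modprobe -r also drops the now-unused
  # dependencies, which is where the rmmod nvme_fabrics / nvme_keyring lines come from
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Stop the target and clear the veth test address on the initiator side
  kill "$NVMF_PID"
  ip -4 addr flush nvmf_init_if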
00:22:45.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:45.056 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:45.314 Cannot find device "nvmf_tgt_br" 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:45.314 Cannot find device "nvmf_tgt_br2" 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:45.314 Cannot find device "nvmf_tgt_br" 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:22:45.314 13:40:58 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:45.314 Cannot find device "nvmf_tgt_br2" 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:45.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:45.314 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:45.314 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:45.315 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:45.573 
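At this point nvmf_veth_init has finished wiring the virtual test topology: a dedicated network namespace for the target, three veth pairs, and a bridge that joins the host-side ends; the iptables ACCEPT rule and the ping checks in the next records then confirm reachability. A minimal standalone sketch of the same wiring, using the interface names and addresses from the records above:

  NS=nvmf_tgt_ns_spdk
  ip netns add "$NS"

  # One veth pair for the initiator and two for the target (the 10.0.0.2 and 10.0.0.3 listeners)
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

  # Target-side ends live in the namespace; the initiator end stays in the root namespace
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring everything up and enslave the host-side peers to one bridge (a single L2 segment)
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  for dev in nvmf_tgt_if nvmf_tgt_if2 lo; do ip netns exec "$NS" ip link set "$dev" up; done
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br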
13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:45.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:22:45.573 00:22:45.573 --- 10.0.0.2 ping statistics --- 00:22:45.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.573 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:45.573 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:45.573 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:22:45.573 00:22:45.573 --- 10.0.0.3 ping statistics --- 00:22:45.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.573 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:45.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:45.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:22:45.573 00:22:45.573 --- 10.0.0.1 ping statistics --- 00:22:45.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.573 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:45.573 13:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.574 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=94604 00:22:45.574 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 94604 00:22:45.574 13:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 94604 ']' 00:22:45.574 13:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.574 13:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:45.574 13:40:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:22:45.574 13:40:58 
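With the ping checks green and nvme-tcp loaded, nvmfappstart launches nvmf_tgt inside the target namespace with the nvmf_auth log component enabled, then blocks until the RPC socket answers (the waitforlisten records that follow). A rough equivalent, with a simplified polling loop standing in for the harness's waitforlisten helper:

  SPDK=/home/vagrant/spdk_repo/spdk
  NS=nvmf_tgt_ns_spdk

  modprobe nvme-tcp    # kernel initiator used later by the nvme connect steps

  # Run the target inside the namespace so it listens on 10.0.0.2 / 10.0.0.3 only
  ip netns exec "$NS" "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!

  # Simplified stand-in for waitforlisten: poll the default RPC socket until it responds
  until "$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is ready"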
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.574 13:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:45.574 13:40:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=94648 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:46.583 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:46.841 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c20a3427ee592af1c98572e2877c9edad711445480fe2337 00:22:46.841 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:46.841 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5Tq 00:22:46.841 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c20a3427ee592af1c98572e2877c9edad711445480fe2337 0 00:22:46.841 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c20a3427ee592af1c98572e2877c9edad711445480fe2337 0 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c20a3427ee592af1c98572e2877c9edad711445480fe2337 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5Tq 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5Tq 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.5Tq 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9f2aedf849446de74c31d6085c974e7f 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.dsZ 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9f2aedf849446de74c31d6085c974e7f 1 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9f2aedf849446de74c31d6085c974e7f 1 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9f2aedf849446de74c31d6085c974e7f 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.dsZ 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.dsZ 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.dsZ 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=37e7c8b7a0a929bbb57cb77588f71d0cf987354a246e784c 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.dYp 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 37e7c8b7a0a929bbb57cb77588f71d0cf987354a246e784c 2 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 37e7c8b7a0a929bbb57cb77588f71d0cf987354a246e784c 2 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 
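Each of these gen_dhchap_key calls follows the same pattern: xxd pulls the requested number of random bytes from /dev/urandom as a hex string, and an inline python snippet wraps that string in the DHHC-1 secret format that the nvme connect commands later in this run pass via --dhchap-secret (the DHHC-1 prefix, a two-digit hash identifier, then base64 of the key bytes with a CRC-32 appended). A hedged reconstruction of that encoding for keys[0]; the CRC-32 little-endian suffix is my reading of the output format, not the harness's own code:

  # Illustrative re-encoding of the null-digest key printed above
  # (hash id 00 = none, 01 = sha256, 02 = sha384, 03 = sha512)
  key=c20a3427ee592af1c98572e2877c9edad711445480fe2337
  hash_id=00

  secret=$(python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print(base64.b64encode(k+struct.pack("<I",zlib.crc32(k)&0xffffffff)).decode())' "$key")

  # The formatted secret is what the harness stores in the /tmp/spdk.key-* files used below
  printf 'DHHC-1:%s:%s:\n' "$hash_id" "$secret" > /tmp/spdk.key-null.5Tq
  chmod 0600 /tmp/spdk.key-null.5Tq
  cat /tmp/spdk.key-null.5Tq   # compare with the --dhchap-secret values later in the log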
00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=37e7c8b7a0a929bbb57cb77588f71d0cf987354a246e784c 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.dYp 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.dYp 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.dYp 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8ddbc30152e18e045aa942e3e0aab569812a0e53ade82b24e32fccf084cf51b4 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.sRk 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8ddbc30152e18e045aa942e3e0aab569812a0e53ade82b24e32fccf084cf51b4 3 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8ddbc30152e18e045aa942e3e0aab569812a0e53ade82b24e32fccf084cf51b4 3 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8ddbc30152e18e045aa942e3e0aab569812a0e53ade82b24e32fccf084cf51b4 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.sRk 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.sRk 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.sRk 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 94604 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 94604 ']' 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:46.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
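Once both daemons are up, the four key files are registered with the target's keyring and with the separate host-side spdk_tgt (the /var/tmp/host.sock instance, pid 94648), and the test then loops over every digest/dhgroup/key combination: bdev_nvme_set_options restricts the host to a single digest and DH group, nvmf_subsystem_add_host pins the expected key on the target, bdev_nvme_attach_controller authenticates with it, the qpair's auth state is dumped over RPC, and a kernel-initiator nvme connect repeats the handshake with the raw secret. One iteration of that loop, condensed from the records below with the NQNs and paths of this run:

  SPDK=/home/vagrant/spdk_repo/spdk
  HOSTSOCK=/var/tmp/host.sock
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Register the key with both the target and the host-side bdev layer
  "$SPDK"/scripts/rpc.py                keyring_file_add_key key0 /tmp/spdk.key-null.5Tq
  "$SPDK"/scripts/rpc.py -s "$HOSTSOCK" keyring_file_add_key key0 /tmp/spdk.key-null.5Tq

  # Host: allow exactly one digest / DH group combination for this pass
  "$SPDK"/scripts/rpc.py -s "$HOSTSOCK" bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null

  # Target: require DH-HMAC-CHAP with key0 from this host
  "$SPDK"/scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0

  # Authenticated attach from the SPDK host, then inspect the qpair's auth block
  "$SPDK"/scripts/rpc.py -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key0
  "$SPDK"/scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'

  # The kernel initiator re-runs the same handshake using the raw secret string, e.g.
  #   nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -q "$HOSTNQN" \
  #       --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret "$(cat /tmp/spdk.key-null.5Tq)"

The qpair dumps in the records below show the corresponding result each time: "state": "completed" with the negotiated digest and dhgroup, and a cntlid that steps 1, 3, 5, ... as each successive attach/detach cycle gets a fresh controller ID.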
00:22:46.842 13:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:47.100 13:40:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.358 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:47.358 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:47.358 13:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 94648 /var/tmp/host.sock 00:22:47.358 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 94648 ']' 00:22:47.358 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:22:47.358 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:47.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:22:47.358 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:22:47.358 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:47.358 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.616 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:47.616 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:22:47.616 13:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:22:47.616 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.616 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.616 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.616 13:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:22:47.616 13:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5Tq 00:22:47.616 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.616 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.616 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.616 13:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.5Tq 00:22:47.616 13:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.5Tq 00:22:47.874 13:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:22:47.874 13:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dsZ 00:22:47.874 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.874 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.874 13:41:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.874 13:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dsZ 00:22:47.875 13:41:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 
/tmp/spdk.key-sha256.dsZ 00:22:48.132 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:22:48.132 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.dYp 00:22:48.132 13:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.132 13:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.132 13:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.132 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.dYp 00:22:48.132 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.dYp 00:22:48.390 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:22:48.390 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.sRk 00:22:48.390 13:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.390 13:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.390 13:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.390 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.sRk 00:22:48.390 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.sRk 00:22:48.957 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:22:48.957 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:22:48.957 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:48.957 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:48.957 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:48.957 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:22:48.957 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:48.957 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:48.957 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:48.957 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:48.957 13:41:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:22:48.957 13:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.957 13:41:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.957 13:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.957 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:48.957 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:49.522 00:22:49.522 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:49.522 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.523 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:49.781 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.781 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.781 13:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.781 13:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.781 13:41:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.781 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:49.781 { 00:22:49.781 "auth": { 00:22:49.781 "dhgroup": "null", 00:22:49.781 "digest": "sha256", 00:22:49.781 "state": "completed" 00:22:49.781 }, 00:22:49.781 "cntlid": 1, 00:22:49.781 "listen_address": { 00:22:49.781 "adrfam": "IPv4", 00:22:49.781 "traddr": "10.0.0.2", 00:22:49.781 "trsvcid": "4420", 00:22:49.781 "trtype": "TCP" 00:22:49.781 }, 00:22:49.781 "peer_address": { 00:22:49.781 "adrfam": "IPv4", 00:22:49.781 "traddr": "10.0.0.1", 00:22:49.781 "trsvcid": "39780", 00:22:49.781 "trtype": "TCP" 00:22:49.781 }, 00:22:49.781 "qid": 0, 00:22:49.781 "state": "enabled" 00:22:49.781 } 00:22:49.781 ]' 00:22:49.781 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:49.781 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:49.781 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:49.781 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:22:49.781 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:49.781 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.781 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.781 13:41:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.347 13:41:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:22:54.530 13:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.788 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.788 13:41:07 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:22:54.789 13:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.789 13:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.789 13:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.789 13:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:54.789 13:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:54.789 13:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:55.047 13:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:22:55.047 13:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:55.047 13:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:55.047 13:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:55.047 13:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:55.047 13:41:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:22:55.047 13:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.047 13:41:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.047 13:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.047 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:55.047 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:22:55.304 00:22:55.304 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:55.304 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:55.304 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.571 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.571 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.571 13:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.571 13:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.833 13:41:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.833 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:55.833 { 00:22:55.833 "auth": { 00:22:55.833 "dhgroup": 
"null", 00:22:55.833 "digest": "sha256", 00:22:55.833 "state": "completed" 00:22:55.833 }, 00:22:55.833 "cntlid": 3, 00:22:55.833 "listen_address": { 00:22:55.833 "adrfam": "IPv4", 00:22:55.833 "traddr": "10.0.0.2", 00:22:55.833 "trsvcid": "4420", 00:22:55.833 "trtype": "TCP" 00:22:55.833 }, 00:22:55.833 "peer_address": { 00:22:55.833 "adrfam": "IPv4", 00:22:55.833 "traddr": "10.0.0.1", 00:22:55.833 "trsvcid": "39806", 00:22:55.833 "trtype": "TCP" 00:22:55.833 }, 00:22:55.833 "qid": 0, 00:22:55.833 "state": "enabled" 00:22:55.833 } 00:22:55.834 ]' 00:22:55.834 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:55.834 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:55.834 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:55.834 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:22:55.834 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:55.834 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.834 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.834 13:41:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.091 13:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:22:57.025 13:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.025 13:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:22:57.025 13:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.025 13:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.025 13:41:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.025 13:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:57.025 13:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:57.025 13:41:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:57.025 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:22:57.025 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:57.025 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:57.025 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:57.025 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:57.025 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:22:57.025 13:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.025 13:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.283 13:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.283 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:57.283 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:57.541 00:22:57.541 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:57.541 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.541 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:22:57.800 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.800 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.800 13:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.800 13:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.800 13:41:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.800 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:22:57.800 { 00:22:57.800 "auth": { 00:22:57.800 "dhgroup": "null", 00:22:57.800 "digest": "sha256", 00:22:57.800 "state": "completed" 00:22:57.800 }, 00:22:57.800 "cntlid": 5, 00:22:57.800 "listen_address": { 00:22:57.800 "adrfam": "IPv4", 00:22:57.800 "traddr": "10.0.0.2", 00:22:57.800 "trsvcid": "4420", 00:22:57.800 "trtype": "TCP" 00:22:57.800 }, 00:22:57.800 "peer_address": { 00:22:57.800 "adrfam": "IPv4", 00:22:57.800 "traddr": "10.0.0.1", 00:22:57.800 "trsvcid": "34426", 00:22:57.800 "trtype": "TCP" 00:22:57.800 }, 00:22:57.800 "qid": 0, 00:22:57.800 "state": "enabled" 00:22:57.800 } 00:22:57.800 ]' 00:22:57.800 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:22:57.800 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:57.800 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:22:57.800 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:22:57.800 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:22:58.059 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.059 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.059 13:41:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:58.317 13:41:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:59.251 13:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.509 13:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:59.509 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:59.509 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:59.767 00:22:59.767 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:22:59.767 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.767 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:00.072 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.072 13:41:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.072 13:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.072 13:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.072 13:41:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.072 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:00.072 { 00:23:00.072 "auth": { 00:23:00.072 "dhgroup": "null", 00:23:00.072 "digest": "sha256", 00:23:00.072 "state": "completed" 00:23:00.072 }, 00:23:00.072 "cntlid": 7, 00:23:00.072 "listen_address": { 00:23:00.072 "adrfam": "IPv4", 00:23:00.072 "traddr": "10.0.0.2", 00:23:00.072 "trsvcid": "4420", 00:23:00.072 "trtype": "TCP" 00:23:00.072 }, 00:23:00.072 "peer_address": { 00:23:00.072 "adrfam": "IPv4", 00:23:00.072 "traddr": "10.0.0.1", 00:23:00.072 "trsvcid": "34434", 00:23:00.072 "trtype": "TCP" 00:23:00.072 }, 00:23:00.072 "qid": 0, 00:23:00.072 "state": "enabled" 00:23:00.072 } 00:23:00.072 ]' 00:23:00.072 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:00.072 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:00.072 13:41:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:00.072 13:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:23:00.072 13:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:00.072 13:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.072 13:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.072 13:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.329 13:41:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:23:01.261 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.261 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:01.261 13:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.261 13:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.261 13:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.261 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:23:01.261 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:01.261 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:01.261 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:01.519 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:23:01.519 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:01.519 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:01.520 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:01.520 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:01.520 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:23:01.520 13:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.520 13:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.520 13:41:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.520 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:01.520 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:01.778 00:23:01.778 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:01.778 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.778 13:41:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:02.343 13:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.343 13:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.343 13:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.343 13:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.343 13:41:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.343 13:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:02.343 { 00:23:02.343 "auth": { 00:23:02.343 "dhgroup": "ffdhe2048", 00:23:02.343 "digest": "sha256", 00:23:02.343 "state": "completed" 00:23:02.343 }, 00:23:02.343 "cntlid": 9, 00:23:02.343 "listen_address": { 00:23:02.343 "adrfam": "IPv4", 00:23:02.343 "traddr": "10.0.0.2", 00:23:02.343 "trsvcid": "4420", 00:23:02.344 "trtype": "TCP" 00:23:02.344 }, 00:23:02.344 "peer_address": { 00:23:02.344 "adrfam": "IPv4", 00:23:02.344 "traddr": "10.0.0.1", 00:23:02.344 "trsvcid": "34466", 00:23:02.344 "trtype": "TCP" 00:23:02.344 }, 00:23:02.344 "qid": 0, 00:23:02.344 "state": "enabled" 00:23:02.344 } 00:23:02.344 ]' 00:23:02.344 13:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:02.344 13:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:02.344 13:41:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:02.344 13:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:02.344 13:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:02.344 13:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.344 13:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.344 13:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.601 13:41:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:23:03.533 13:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.533 13:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:03.533 13:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.533 13:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.533 13:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.534 13:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:03.534 13:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.534 13:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.791 13:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:23:03.791 13:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:03.791 13:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:03.791 13:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:03.791 13:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:03.791 13:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:23:03.791 13:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.791 13:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.791 13:41:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.791 13:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:03.791 13:41:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:04.356 00:23:04.356 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:04.356 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:04.356 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.356 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.615 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.615 13:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.615 13:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.615 13:41:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.615 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:04.615 { 00:23:04.615 "auth": { 00:23:04.615 "dhgroup": "ffdhe2048", 00:23:04.615 "digest": "sha256", 00:23:04.615 "state": "completed" 00:23:04.615 }, 00:23:04.615 "cntlid": 11, 00:23:04.615 "listen_address": { 00:23:04.615 "adrfam": "IPv4", 00:23:04.615 "traddr": "10.0.0.2", 00:23:04.615 "trsvcid": "4420", 00:23:04.615 "trtype": "TCP" 00:23:04.615 }, 00:23:04.615 "peer_address": { 00:23:04.615 "adrfam": "IPv4", 00:23:04.615 "traddr": "10.0.0.1", 00:23:04.615 "trsvcid": "34498", 00:23:04.615 "trtype": "TCP" 00:23:04.615 }, 00:23:04.615 "qid": 0, 00:23:04.615 "state": "enabled" 00:23:04.615 } 00:23:04.615 ]' 00:23:04.615 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:04.615 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:04.615 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:04.615 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:04.615 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:04.615 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.615 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.615 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.873 13:41:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:23:05.808 13:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.808 13:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:05.808 13:41:18 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.808 13:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.808 13:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.808 13:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:05.808 13:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:05.808 13:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:06.067 13:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:23:06.067 13:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:06.067 13:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:06.067 13:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:06.067 13:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:06.067 13:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:23:06.067 13:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.067 13:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.067 13:41:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.067 13:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:06.067 13:41:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:06.325 00:23:06.325 13:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:06.325 13:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:06.325 13:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.644 13:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.644 13:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.644 13:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.644 13:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.644 13:41:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.644 13:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:06.644 { 00:23:06.644 "auth": { 00:23:06.644 "dhgroup": "ffdhe2048", 00:23:06.644 "digest": "sha256", 00:23:06.644 "state": "completed" 00:23:06.644 }, 00:23:06.644 "cntlid": 13, 00:23:06.644 "listen_address": { 
00:23:06.644 "adrfam": "IPv4", 00:23:06.644 "traddr": "10.0.0.2", 00:23:06.644 "trsvcid": "4420", 00:23:06.644 "trtype": "TCP" 00:23:06.644 }, 00:23:06.644 "peer_address": { 00:23:06.644 "adrfam": "IPv4", 00:23:06.644 "traddr": "10.0.0.1", 00:23:06.644 "trsvcid": "40134", 00:23:06.644 "trtype": "TCP" 00:23:06.644 }, 00:23:06.644 "qid": 0, 00:23:06.644 "state": "enabled" 00:23:06.644 } 00:23:06.644 ]' 00:23:06.644 13:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:06.644 13:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:06.644 13:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:06.644 13:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:06.644 13:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:06.644 13:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.644 13:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.644 13:41:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:07.210 13:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:23:07.777 13:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.777 13:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:07.777 13:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.777 13:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.777 13:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.777 13:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:07.777 13:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:07.777 13:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:08.035 13:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:23:08.035 13:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:08.035 13:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:08.036 13:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:08.036 13:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:08.036 13:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:23:08.036 
13:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.036 13:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.036 13:41:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.036 13:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:08.036 13:41:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:08.295 00:23:08.295 13:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:08.295 13:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.295 13:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:08.553 13:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.553 13:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.553 13:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.553 13:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.553 13:41:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.553 13:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:08.553 { 00:23:08.553 "auth": { 00:23:08.553 "dhgroup": "ffdhe2048", 00:23:08.553 "digest": "sha256", 00:23:08.553 "state": "completed" 00:23:08.553 }, 00:23:08.553 "cntlid": 15, 00:23:08.553 "listen_address": { 00:23:08.553 "adrfam": "IPv4", 00:23:08.553 "traddr": "10.0.0.2", 00:23:08.553 "trsvcid": "4420", 00:23:08.553 "trtype": "TCP" 00:23:08.553 }, 00:23:08.553 "peer_address": { 00:23:08.553 "adrfam": "IPv4", 00:23:08.553 "traddr": "10.0.0.1", 00:23:08.553 "trsvcid": "40166", 00:23:08.553 "trtype": "TCP" 00:23:08.553 }, 00:23:08.553 "qid": 0, 00:23:08.553 "state": "enabled" 00:23:08.553 } 00:23:08.553 ]' 00:23:08.553 13:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:08.553 13:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:08.553 13:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:08.553 13:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:08.553 13:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:08.811 13:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.811 13:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.811 13:41:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:09.069 13:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:23:10.019 13:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.019 13:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:10.019 13:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.019 13:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.019 13:41:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.019 13:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.019 13:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:10.019 13:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:10.019 13:41:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:10.019 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:23:10.019 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:10.019 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:10.019 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:10.019 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:10.019 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:23:10.019 13:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.019 13:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.019 13:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.019 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:10.019 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:10.585 00:23:10.585 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:10.586 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:10.586 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.844 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:23:10.844 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:10.844 13:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.844 13:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.844 13:41:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.844 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:10.844 { 00:23:10.844 "auth": { 00:23:10.844 "dhgroup": "ffdhe3072", 00:23:10.844 "digest": "sha256", 00:23:10.844 "state": "completed" 00:23:10.844 }, 00:23:10.844 "cntlid": 17, 00:23:10.844 "listen_address": { 00:23:10.844 "adrfam": "IPv4", 00:23:10.844 "traddr": "10.0.0.2", 00:23:10.844 "trsvcid": "4420", 00:23:10.844 "trtype": "TCP" 00:23:10.844 }, 00:23:10.844 "peer_address": { 00:23:10.844 "adrfam": "IPv4", 00:23:10.844 "traddr": "10.0.0.1", 00:23:10.844 "trsvcid": "40192", 00:23:10.844 "trtype": "TCP" 00:23:10.844 }, 00:23:10.844 "qid": 0, 00:23:10.844 "state": "enabled" 00:23:10.844 } 00:23:10.844 ]' 00:23:10.844 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:10.844 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:10.844 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:10.844 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:10.844 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:10.844 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:10.844 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.844 13:41:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.102 13:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:23:12.035 13:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.035 13:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:12.035 13:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.035 13:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.035 13:41:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.035 13:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:12.035 13:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:12.035 13:41:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:12.293 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:23:12.293 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:12.293 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:12.293 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:12.293 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:12.293 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:23:12.293 13:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.293 13:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.293 13:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.293 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:12.293 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:12.551 00:23:12.551 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:12.551 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.551 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:12.808 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.808 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.808 13:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.808 13:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.808 13:41:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.808 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:12.808 { 00:23:12.808 "auth": { 00:23:12.808 "dhgroup": "ffdhe3072", 00:23:12.808 "digest": "sha256", 00:23:12.808 "state": "completed" 00:23:12.808 }, 00:23:12.808 "cntlid": 19, 00:23:12.808 "listen_address": { 00:23:12.808 "adrfam": "IPv4", 00:23:12.808 "traddr": "10.0.0.2", 00:23:12.808 "trsvcid": "4420", 00:23:12.808 "trtype": "TCP" 00:23:12.808 }, 00:23:12.808 "peer_address": { 00:23:12.808 "adrfam": "IPv4", 00:23:12.808 "traddr": "10.0.0.1", 00:23:12.808 "trsvcid": "40220", 00:23:12.809 "trtype": "TCP" 00:23:12.809 }, 00:23:12.809 "qid": 0, 00:23:12.809 "state": "enabled" 00:23:12.809 } 00:23:12.809 ]' 00:23:12.809 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:12.809 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:12.809 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.dhgroup' 00:23:12.809 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:12.809 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:13.066 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.066 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.066 13:41:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.323 13:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:23:13.893 13:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.893 13:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:13.893 13:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.893 13:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.893 13:41:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.893 13:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:13.893 13:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:13.893 13:41:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:14.458 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:23:14.458 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:14.459 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:14.459 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:14.459 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:14.459 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:23:14.459 13:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.459 13:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.459 13:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.459 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:14.459 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:14.716 00:23:14.716 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:14.716 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:14.716 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.975 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.975 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:14.975 13:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.975 13:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.975 13:41:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.975 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:14.975 { 00:23:14.975 "auth": { 00:23:14.975 "dhgroup": "ffdhe3072", 00:23:14.975 "digest": "sha256", 00:23:14.975 "state": "completed" 00:23:14.975 }, 00:23:14.975 "cntlid": 21, 00:23:14.975 "listen_address": { 00:23:14.975 "adrfam": "IPv4", 00:23:14.975 "traddr": "10.0.0.2", 00:23:14.975 "trsvcid": "4420", 00:23:14.975 "trtype": "TCP" 00:23:14.975 }, 00:23:14.975 "peer_address": { 00:23:14.975 "adrfam": "IPv4", 00:23:14.975 "traddr": "10.0.0.1", 00:23:14.975 "trsvcid": "40240", 00:23:14.975 "trtype": "TCP" 00:23:14.975 }, 00:23:14.975 "qid": 0, 00:23:14.975 "state": "enabled" 00:23:14.975 } 00:23:14.975 ]' 00:23:14.975 13:41:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:14.975 13:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:14.975 13:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:14.975 13:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:14.975 13:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:15.233 13:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:15.233 13:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.233 13:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.492 13:41:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:23:16.058 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.058 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:16.058 13:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:23:16.058 13:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.058 13:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.058 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:16.058 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:16.058 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:16.316 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:23:16.316 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:16.316 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:16.316 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:23:16.316 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:16.316 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:23:16.316 13:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.316 13:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.316 13:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.317 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:16.317 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:16.894 00:23:16.894 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:16.894 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.894 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:16.894 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.894 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.894 13:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.894 13:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.894 13:41:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.894 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:16.894 { 00:23:16.894 "auth": { 00:23:16.894 "dhgroup": "ffdhe3072", 00:23:16.894 "digest": "sha256", 00:23:16.894 "state": "completed" 00:23:16.894 }, 00:23:16.894 "cntlid": 23, 00:23:16.894 "listen_address": { 00:23:16.894 "adrfam": "IPv4", 00:23:16.894 "traddr": 
"10.0.0.2", 00:23:16.894 "trsvcid": "4420", 00:23:16.894 "trtype": "TCP" 00:23:16.894 }, 00:23:16.894 "peer_address": { 00:23:16.894 "adrfam": "IPv4", 00:23:16.894 "traddr": "10.0.0.1", 00:23:16.894 "trsvcid": "33334", 00:23:16.894 "trtype": "TCP" 00:23:16.894 }, 00:23:16.894 "qid": 0, 00:23:16.894 "state": "enabled" 00:23:16.894 } 00:23:16.894 ]' 00:23:16.894 13:41:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:17.151 13:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:17.151 13:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:17.151 13:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:17.151 13:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:17.151 13:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.151 13:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.151 13:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.408 13:41:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:18.341 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:18.906 00:23:18.906 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:18.906 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:18.906 13:41:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.165 13:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.165 13:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.165 13:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.165 13:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.165 13:41:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.165 13:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:19.165 { 00:23:19.165 "auth": { 00:23:19.165 "dhgroup": "ffdhe4096", 00:23:19.165 "digest": "sha256", 00:23:19.165 "state": "completed" 00:23:19.165 }, 00:23:19.165 "cntlid": 25, 00:23:19.165 "listen_address": { 00:23:19.165 "adrfam": "IPv4", 00:23:19.165 "traddr": "10.0.0.2", 00:23:19.165 "trsvcid": "4420", 00:23:19.165 "trtype": "TCP" 00:23:19.165 }, 00:23:19.165 "peer_address": { 00:23:19.165 "adrfam": "IPv4", 00:23:19.165 "traddr": "10.0.0.1", 00:23:19.165 "trsvcid": "33364", 00:23:19.165 "trtype": "TCP" 00:23:19.165 }, 00:23:19.165 "qid": 0, 00:23:19.165 "state": "enabled" 00:23:19.165 } 00:23:19.165 ]' 00:23:19.165 13:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:19.165 13:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:19.165 13:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:19.165 13:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:19.165 13:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:19.165 13:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.165 13:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.165 13:41:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.422 13:41:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:20.402 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:20.967 00:23:20.967 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:20.967 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:20.967 13:41:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.224 13:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:23:21.225 13:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:21.225 13:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.225 13:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.225 13:41:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.225 13:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:21.225 { 00:23:21.225 "auth": { 00:23:21.225 "dhgroup": "ffdhe4096", 00:23:21.225 "digest": "sha256", 00:23:21.225 "state": "completed" 00:23:21.225 }, 00:23:21.225 "cntlid": 27, 00:23:21.225 "listen_address": { 00:23:21.225 "adrfam": "IPv4", 00:23:21.225 "traddr": "10.0.0.2", 00:23:21.225 "trsvcid": "4420", 00:23:21.225 "trtype": "TCP" 00:23:21.225 }, 00:23:21.225 "peer_address": { 00:23:21.225 "adrfam": "IPv4", 00:23:21.225 "traddr": "10.0.0.1", 00:23:21.225 "trsvcid": "33390", 00:23:21.225 "trtype": "TCP" 00:23:21.225 }, 00:23:21.225 "qid": 0, 00:23:21.225 "state": "enabled" 00:23:21.225 } 00:23:21.225 ]' 00:23:21.225 13:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:21.225 13:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:21.225 13:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:21.225 13:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:21.225 13:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:21.225 13:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.225 13:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.225 13:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.791 13:41:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:23:22.358 13:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.358 13:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:22.358 13:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.358 13:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.358 13:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.358 13:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:22.358 13:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:22.358 13:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:23:22.617 13:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:23:22.617 13:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:22.617 13:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:22.617 13:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:22.617 13:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:22.617 13:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:23:22.617 13:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.617 13:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.617 13:41:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.617 13:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:22.617 13:41:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:23.183 00:23:23.183 13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:23.183 13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:23.183 13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.183 13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.183 13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:23.183 13:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.183 13:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.183 13:41:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.183 13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:23.183 { 00:23:23.183 "auth": { 00:23:23.183 "dhgroup": "ffdhe4096", 00:23:23.183 "digest": "sha256", 00:23:23.183 "state": "completed" 00:23:23.183 }, 00:23:23.183 "cntlid": 29, 00:23:23.183 "listen_address": { 00:23:23.183 "adrfam": "IPv4", 00:23:23.183 "traddr": "10.0.0.2", 00:23:23.183 "trsvcid": "4420", 00:23:23.183 "trtype": "TCP" 00:23:23.183 }, 00:23:23.183 "peer_address": { 00:23:23.183 "adrfam": "IPv4", 00:23:23.183 "traddr": "10.0.0.1", 00:23:23.183 "trsvcid": "33420", 00:23:23.183 "trtype": "TCP" 00:23:23.183 }, 00:23:23.183 "qid": 0, 00:23:23.183 "state": "enabled" 00:23:23.183 } 00:23:23.183 ]' 00:23:23.183 13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:23.453 13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:23.453 13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:23.453 
13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:23.453 13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:23.453 13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:23.453 13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:23.453 13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:23.742 13:41:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:24.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:24.677 13:41:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:25.243 00:23:25.243 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:25.243 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:25.243 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.501 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.501 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.501 13:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.501 13:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.501 13:41:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.501 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:25.501 { 00:23:25.501 "auth": { 00:23:25.501 "dhgroup": "ffdhe4096", 00:23:25.501 "digest": "sha256", 00:23:25.501 "state": "completed" 00:23:25.501 }, 00:23:25.501 "cntlid": 31, 00:23:25.501 "listen_address": { 00:23:25.501 "adrfam": "IPv4", 00:23:25.501 "traddr": "10.0.0.2", 00:23:25.501 "trsvcid": "4420", 00:23:25.501 "trtype": "TCP" 00:23:25.501 }, 00:23:25.501 "peer_address": { 00:23:25.501 "adrfam": "IPv4", 00:23:25.501 "traddr": "10.0.0.1", 00:23:25.501 "trsvcid": "33432", 00:23:25.501 "trtype": "TCP" 00:23:25.501 }, 00:23:25.501 "qid": 0, 00:23:25.501 "state": "enabled" 00:23:25.501 } 00:23:25.501 ]' 00:23:25.501 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:25.501 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:25.501 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:25.501 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:25.501 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:25.501 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:25.501 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:25.502 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:25.760 13:41:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:23:26.694 13:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:26.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:26.694 13:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:26.694 13:41:39 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.694 13:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.694 13:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.694 13:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:23:26.694 13:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:26.694 13:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:26.694 13:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:26.952 13:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:23:26.952 13:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:26.952 13:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:26.952 13:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:26.952 13:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:26.952 13:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:23:26.952 13:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.952 13:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.952 13:41:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.952 13:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:26.952 13:41:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:27.240 00:23:27.240 13:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:27.240 13:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.241 13:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:27.500 13:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.500 13:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:27.500 13:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.500 13:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.500 13:41:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.500 13:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:27.500 { 00:23:27.500 "auth": { 00:23:27.500 "dhgroup": "ffdhe6144", 00:23:27.500 "digest": "sha256", 00:23:27.500 "state": "completed" 
00:23:27.500 }, 00:23:27.500 "cntlid": 33, 00:23:27.500 "listen_address": { 00:23:27.500 "adrfam": "IPv4", 00:23:27.500 "traddr": "10.0.0.2", 00:23:27.500 "trsvcid": "4420", 00:23:27.500 "trtype": "TCP" 00:23:27.500 }, 00:23:27.500 "peer_address": { 00:23:27.500 "adrfam": "IPv4", 00:23:27.500 "traddr": "10.0.0.1", 00:23:27.500 "trsvcid": "42092", 00:23:27.500 "trtype": "TCP" 00:23:27.500 }, 00:23:27.500 "qid": 0, 00:23:27.500 "state": "enabled" 00:23:27.500 } 00:23:27.500 ]' 00:23:27.500 13:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:27.758 13:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:27.758 13:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:27.758 13:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:27.758 13:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:27.758 13:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:27.758 13:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:27.758 13:41:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.017 13:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:23:28.951 13:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:28.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:28.951 13:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:28.951 13:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.951 13:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.951 13:41:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.951 13:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:28.951 13:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:28.951 13:41:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:29.210 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:23:29.210 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:29.210 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:29.210 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:29.210 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:29.210 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:23:29.210 13:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.210 13:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.210 13:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.210 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:29.210 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:29.775 00:23:29.775 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:29.775 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.775 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:30.034 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.034 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:30.034 13:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.034 13:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.034 13:41:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.034 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:30.034 { 00:23:30.034 "auth": { 00:23:30.034 "dhgroup": "ffdhe6144", 00:23:30.034 "digest": "sha256", 00:23:30.034 "state": "completed" 00:23:30.034 }, 00:23:30.034 "cntlid": 35, 00:23:30.034 "listen_address": { 00:23:30.034 "adrfam": "IPv4", 00:23:30.034 "traddr": "10.0.0.2", 00:23:30.034 "trsvcid": "4420", 00:23:30.034 "trtype": "TCP" 00:23:30.034 }, 00:23:30.034 "peer_address": { 00:23:30.034 "adrfam": "IPv4", 00:23:30.034 "traddr": "10.0.0.1", 00:23:30.034 "trsvcid": "42132", 00:23:30.034 "trtype": "TCP" 00:23:30.034 }, 00:23:30.034 "qid": 0, 00:23:30.034 "state": "enabled" 00:23:30.034 } 00:23:30.034 ]' 00:23:30.034 13:41:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:30.034 13:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:30.034 13:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:30.034 13:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:30.034 13:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:30.034 13:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:30.034 13:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:30.034 13:41:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:30.599 13:41:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:23:31.166 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:31.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:31.166 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:31.166 13:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.166 13:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.166 13:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.166 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:31.166 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:31.166 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:31.425 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:23:31.425 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:31.425 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:31.425 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:31.425 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:31.425 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:23:31.425 13:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.425 13:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.425 13:41:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.425 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:31.425 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:31.992 00:23:31.992 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:31.992 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:31.992 13:41:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.251 13:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 
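For readability, the connect_authenticate pass that the trace above keeps repeating (sha256 with each ffdhe dhgroup, one key index per pass) condenses to the sketch below. All values are taken from the log; rpc_cmd is assumed to be the test suite's wrapper for the target-side rpc.py, and the host-side bdev_nvme RPCs go through /var/tmp/host.sock exactly as shown in the trace.

# Sketch of one connect_authenticate pass (placeholder variables; values from the log above).
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd
DIGEST=sha256 DHGROUP=ffdhe6144 KEY=key1

# Host side: restrict DH-HMAC-CHAP digests/dhgroups for controllers attached from now on.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"

# Target side: allow the host NQN and bind it to one of the pre-loaded dhchap keys.
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "$KEY"

# Host side: attach a controller so the in-band authentication handshake actually runs.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "$KEY"

# Verify: the controller shows up on the host, and the target-side qpair reports the
# negotiated digest, dhgroup and a "completed" auth state.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth'

# Tear down before the next key/dhgroup combination.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

The surrounding trace simply repeats this block for every digest, dhgroup and key index in the test's loops.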
00:23:32.251 13:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.251 13:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.251 13:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.251 13:41:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.251 13:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:32.251 { 00:23:32.251 "auth": { 00:23:32.251 "dhgroup": "ffdhe6144", 00:23:32.251 "digest": "sha256", 00:23:32.251 "state": "completed" 00:23:32.251 }, 00:23:32.251 "cntlid": 37, 00:23:32.251 "listen_address": { 00:23:32.251 "adrfam": "IPv4", 00:23:32.251 "traddr": "10.0.0.2", 00:23:32.251 "trsvcid": "4420", 00:23:32.251 "trtype": "TCP" 00:23:32.251 }, 00:23:32.251 "peer_address": { 00:23:32.251 "adrfam": "IPv4", 00:23:32.251 "traddr": "10.0.0.1", 00:23:32.251 "trsvcid": "42156", 00:23:32.251 "trtype": "TCP" 00:23:32.251 }, 00:23:32.251 "qid": 0, 00:23:32.251 "state": "enabled" 00:23:32.251 } 00:23:32.251 ]' 00:23:32.251 13:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:32.251 13:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:32.251 13:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:32.251 13:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:32.251 13:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:32.510 13:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.510 13:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.510 13:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:32.768 13:41:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:23:33.334 13:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:33.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.592 13:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:33.592 13:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.592 13:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.592 13:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.592 13:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:33.592 13:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:33.592 13:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:23:33.850 13:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:23:33.850 13:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:33.850 13:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:33.850 13:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:23:33.850 13:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:33.850 13:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:23:33.850 13:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.850 13:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.850 13:41:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.850 13:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:33.850 13:41:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:34.108 00:23:34.108 13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:34.108 13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:34.108 13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.675 13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.675 13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.675 13:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.675 13:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.675 13:41:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.675 13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:34.675 { 00:23:34.675 "auth": { 00:23:34.675 "dhgroup": "ffdhe6144", 00:23:34.675 "digest": "sha256", 00:23:34.675 "state": "completed" 00:23:34.675 }, 00:23:34.675 "cntlid": 39, 00:23:34.675 "listen_address": { 00:23:34.675 "adrfam": "IPv4", 00:23:34.675 "traddr": "10.0.0.2", 00:23:34.675 "trsvcid": "4420", 00:23:34.675 "trtype": "TCP" 00:23:34.675 }, 00:23:34.675 "peer_address": { 00:23:34.675 "adrfam": "IPv4", 00:23:34.675 "traddr": "10.0.0.1", 00:23:34.675 "trsvcid": "42182", 00:23:34.675 "trtype": "TCP" 00:23:34.675 }, 00:23:34.675 "qid": 0, 00:23:34.675 "state": "enabled" 00:23:34.675 } 00:23:34.675 ]' 00:23:34.675 13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:34.675 13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:34.675 13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:34.675 
13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:34.675 13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:34.675 13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.675 13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.675 13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:34.933 13:41:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:23:35.867 13:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:35.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:35.867 13:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:35.867 13:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.867 13:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.867 13:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.867 13:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:23:35.867 13:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:35.867 13:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:35.867 13:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:36.125 13:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:23:36.125 13:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:36.125 13:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:36.125 13:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:36.125 13:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:36.125 13:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:23:36.125 13:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.125 13:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.125 13:41:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.125 13:41:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:36.125 13:41:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:36.691 00:23:36.691 13:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:36.691 13:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:36.691 13:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.949 13:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.949 13:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.949 13:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.949 13:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.949 13:41:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.949 13:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:36.949 { 00:23:36.949 "auth": { 00:23:36.949 "dhgroup": "ffdhe8192", 00:23:36.950 "digest": "sha256", 00:23:36.950 "state": "completed" 00:23:36.950 }, 00:23:36.950 "cntlid": 41, 00:23:36.950 "listen_address": { 00:23:36.950 "adrfam": "IPv4", 00:23:36.950 "traddr": "10.0.0.2", 00:23:36.950 "trsvcid": "4420", 00:23:36.950 "trtype": "TCP" 00:23:36.950 }, 00:23:36.950 "peer_address": { 00:23:36.950 "adrfam": "IPv4", 00:23:36.950 "traddr": "10.0.0.1", 00:23:36.950 "trsvcid": "51620", 00:23:36.950 "trtype": "TCP" 00:23:36.950 }, 00:23:36.950 "qid": 0, 00:23:36.950 "state": "enabled" 00:23:36.950 } 00:23:36.950 ]' 00:23:36.950 13:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:36.950 13:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:36.950 13:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:36.950 13:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:36.950 13:41:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:36.950 13:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.950 13:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.950 13:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:37.517 13:41:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:23:38.081 13:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:38.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:38.081 13:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:38.081 13:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.081 13:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.081 13:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.081 13:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:38.081 13:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:38.081 13:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:38.339 13:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:23:38.339 13:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:38.339 13:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:38.339 13:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:38.339 13:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:38.339 13:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:23:38.339 13:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.339 13:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.339 13:41:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.339 13:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:38.339 13:41:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:39.275 00:23:39.275 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:39.275 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:39.275 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:39.533 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.533 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:39.533 13:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.533 13:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.533 13:41:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.533 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:39.533 { 00:23:39.533 "auth": { 00:23:39.533 "dhgroup": "ffdhe8192", 00:23:39.533 "digest": "sha256", 00:23:39.533 "state": 
"completed" 00:23:39.533 }, 00:23:39.533 "cntlid": 43, 00:23:39.533 "listen_address": { 00:23:39.533 "adrfam": "IPv4", 00:23:39.533 "traddr": "10.0.0.2", 00:23:39.533 "trsvcid": "4420", 00:23:39.533 "trtype": "TCP" 00:23:39.533 }, 00:23:39.533 "peer_address": { 00:23:39.533 "adrfam": "IPv4", 00:23:39.533 "traddr": "10.0.0.1", 00:23:39.533 "trsvcid": "51646", 00:23:39.533 "trtype": "TCP" 00:23:39.533 }, 00:23:39.533 "qid": 0, 00:23:39.533 "state": "enabled" 00:23:39.533 } 00:23:39.533 ]' 00:23:39.533 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:39.533 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:39.533 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:39.533 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:39.533 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:39.533 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:39.533 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:39.533 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:40.099 13:41:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:23:40.704 13:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.704 13:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:40.704 13:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.704 13:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.704 13:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.704 13:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:40.704 13:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:40.704 13:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:40.961 13:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:23:40.961 13:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:40.961 13:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:40.961 13:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:40.961 13:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:40.961 13:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:23:40.961 13:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.961 13:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.961 13:41:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.961 13:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:40.961 13:41:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:41.894 00:23:41.894 13:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:41.894 13:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:41.894 13:41:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:42.152 13:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.153 13:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:42.153 13:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.153 13:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.153 13:41:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.153 13:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:42.153 { 00:23:42.153 "auth": { 00:23:42.153 "dhgroup": "ffdhe8192", 00:23:42.153 "digest": "sha256", 00:23:42.153 "state": "completed" 00:23:42.153 }, 00:23:42.153 "cntlid": 45, 00:23:42.153 "listen_address": { 00:23:42.153 "adrfam": "IPv4", 00:23:42.153 "traddr": "10.0.0.2", 00:23:42.153 "trsvcid": "4420", 00:23:42.153 "trtype": "TCP" 00:23:42.153 }, 00:23:42.153 "peer_address": { 00:23:42.153 "adrfam": "IPv4", 00:23:42.153 "traddr": "10.0.0.1", 00:23:42.153 "trsvcid": "51660", 00:23:42.153 "trtype": "TCP" 00:23:42.153 }, 00:23:42.153 "qid": 0, 00:23:42.153 "state": "enabled" 00:23:42.153 } 00:23:42.153 ]' 00:23:42.153 13:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:42.153 13:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:42.153 13:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:42.153 13:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:42.153 13:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:42.153 13:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:42.153 13:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.153 13:41:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:42.411 13:41:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:23:43.346 13:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:43.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:43.346 13:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:43.346 13:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.346 13:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.346 13:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.346 13:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:43.346 13:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:43.346 13:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:43.604 13:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:23:43.604 13:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:43.604 13:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:23:43.604 13:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:23:43.605 13:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:43.605 13:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:23:43.605 13:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.605 13:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.605 13:41:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.605 13:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:43.605 13:41:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:44.171 00:23:44.171 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:44.171 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:44.171 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:44.428 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:23:44.428 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:44.428 13:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.428 13:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.428 13:41:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.428 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:44.428 { 00:23:44.428 "auth": { 00:23:44.428 "dhgroup": "ffdhe8192", 00:23:44.428 "digest": "sha256", 00:23:44.428 "state": "completed" 00:23:44.428 }, 00:23:44.428 "cntlid": 47, 00:23:44.428 "listen_address": { 00:23:44.428 "adrfam": "IPv4", 00:23:44.428 "traddr": "10.0.0.2", 00:23:44.428 "trsvcid": "4420", 00:23:44.428 "trtype": "TCP" 00:23:44.428 }, 00:23:44.428 "peer_address": { 00:23:44.429 "adrfam": "IPv4", 00:23:44.429 "traddr": "10.0.0.1", 00:23:44.429 "trsvcid": "51682", 00:23:44.429 "trtype": "TCP" 00:23:44.429 }, 00:23:44.429 "qid": 0, 00:23:44.429 "state": "enabled" 00:23:44.429 } 00:23:44.429 ]' 00:23:44.429 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:44.429 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:23:44.429 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:44.429 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:44.429 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:44.686 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:44.686 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:44.686 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:44.944 13:41:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:23:45.510 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:45.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:45.510 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:45.510 13:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.510 13:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.510 13:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.510 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:23:45.510 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:23:45.510 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:45.510 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups null 00:23:45.510 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:46.077 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:23:46.077 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:46.077 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:46.077 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:46.077 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:46.077 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:23:46.077 13:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.077 13:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.077 13:41:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.077 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:46.077 13:41:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:46.333 00:23:46.333 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:46.333 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:46.333 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:46.592 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.592 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:46.592 13:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.592 13:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.592 13:41:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.592 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:46.592 { 00:23:46.592 "auth": { 00:23:46.592 "dhgroup": "null", 00:23:46.592 "digest": "sha384", 00:23:46.592 "state": "completed" 00:23:46.592 }, 00:23:46.592 "cntlid": 49, 00:23:46.592 "listen_address": { 00:23:46.592 "adrfam": "IPv4", 00:23:46.592 "traddr": "10.0.0.2", 00:23:46.592 "trsvcid": "4420", 00:23:46.592 "trtype": "TCP" 00:23:46.592 }, 00:23:46.592 "peer_address": { 00:23:46.592 "adrfam": "IPv4", 00:23:46.592 "traddr": "10.0.0.1", 00:23:46.592 "trsvcid": "40630", 00:23:46.592 "trtype": "TCP" 00:23:46.592 }, 00:23:46.592 "qid": 0, 00:23:46.592 "state": "enabled" 00:23:46.592 } 00:23:46.592 ]' 00:23:46.592 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
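The sha384 passes that start here follow the same shape; the per-pass verification and the kernel-initiator round trip reduce to the sketch below. The sha384/null/key0 values are taken from this iteration of the trace, rpc_cmd again stands for the test's target-side RPC wrapper, and the DHHC-1 string is the host-side secret for the same key, copied from the nvme connect line in the log.

# Verify what the target reports for the authenticated qpair (sketch, values from this pass).
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach the SPDK host controller, then exercise the same key through the kernel
# initiator: nvme-cli is handed the DHHC-1 secret directly and must disconnect cleanly.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd \
        --hostid 1922f591-978b-44b0-bc45-c969115d53dd \
        --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==:
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd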
00:23:46.592 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:46.592 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:46.592 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:23:46.592 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:46.592 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:46.592 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:46.592 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:46.850 13:41:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:47.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 00:23:47.783 13:42:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:48.347 00:23:48.347 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:48.347 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:48.347 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:48.604 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.604 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:48.604 13:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.604 13:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.604 13:42:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.604 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:48.604 { 00:23:48.604 "auth": { 00:23:48.604 "dhgroup": "null", 00:23:48.604 "digest": "sha384", 00:23:48.604 "state": "completed" 00:23:48.604 }, 00:23:48.604 "cntlid": 51, 00:23:48.604 "listen_address": { 00:23:48.604 "adrfam": "IPv4", 00:23:48.604 "traddr": "10.0.0.2", 00:23:48.604 "trsvcid": "4420", 00:23:48.604 "trtype": "TCP" 00:23:48.604 }, 00:23:48.604 "peer_address": { 00:23:48.604 "adrfam": "IPv4", 00:23:48.604 "traddr": "10.0.0.1", 00:23:48.604 "trsvcid": "40664", 00:23:48.604 "trtype": "TCP" 00:23:48.604 }, 00:23:48.604 "qid": 0, 00:23:48.604 "state": "enabled" 00:23:48.604 } 00:23:48.604 ]' 00:23:48.604 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:48.604 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:48.604 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:48.604 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:23:48.604 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:48.604 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:48.604 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:48.604 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:48.865 13:42:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:23:49.567 13:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:49.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:49.567 13:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:49.567 13:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.567 13:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.567 13:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.567 13:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:49.567 13:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:49.567 13:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:50.131 13:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:23:50.131 13:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:50.131 13:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:50.131 13:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:50.131 13:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:50.131 13:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:23:50.131 13:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.131 13:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.131 13:42:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.131 13:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:50.131 13:42:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:50.388 00:23:50.388 13:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:50.388 13:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:50.388 13:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.646 13:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.646 13:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:50.646 13:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.646 13:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.646 13:42:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.646 13:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:50.646 { 00:23:50.646 "auth": { 00:23:50.646 "dhgroup": "null", 00:23:50.646 "digest": "sha384", 00:23:50.646 "state": "completed" 00:23:50.646 }, 
00:23:50.646 "cntlid": 53, 00:23:50.646 "listen_address": { 00:23:50.646 "adrfam": "IPv4", 00:23:50.646 "traddr": "10.0.0.2", 00:23:50.646 "trsvcid": "4420", 00:23:50.646 "trtype": "TCP" 00:23:50.646 }, 00:23:50.646 "peer_address": { 00:23:50.646 "adrfam": "IPv4", 00:23:50.646 "traddr": "10.0.0.1", 00:23:50.646 "trsvcid": "40688", 00:23:50.646 "trtype": "TCP" 00:23:50.646 }, 00:23:50.646 "qid": 0, 00:23:50.646 "state": "enabled" 00:23:50.646 } 00:23:50.646 ]' 00:23:50.646 13:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:50.646 13:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:50.646 13:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:50.646 13:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:23:50.646 13:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:50.646 13:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:50.646 13:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:50.646 13:42:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:51.211 13:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:23:51.776 13:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:51.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:51.776 13:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:51.776 13:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.776 13:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.776 13:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.776 13:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:51.776 13:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:51.776 13:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:23:52.034 13:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:23:52.034 13:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:52.034 13:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:52.034 13:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:23:52.034 13:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:23:52.034 13:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 
--dhchap-key key3 00:23:52.034 13:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.034 13:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.034 13:42:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.034 13:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:52.034 13:42:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:23:52.291 00:23:52.291 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:52.291 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:52.291 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.549 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.549 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:52.549 13:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.549 13:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.549 13:42:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.549 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:52.549 { 00:23:52.549 "auth": { 00:23:52.549 "dhgroup": "null", 00:23:52.549 "digest": "sha384", 00:23:52.549 "state": "completed" 00:23:52.549 }, 00:23:52.549 "cntlid": 55, 00:23:52.549 "listen_address": { 00:23:52.549 "adrfam": "IPv4", 00:23:52.549 "traddr": "10.0.0.2", 00:23:52.549 "trsvcid": "4420", 00:23:52.549 "trtype": "TCP" 00:23:52.549 }, 00:23:52.549 "peer_address": { 00:23:52.549 "adrfam": "IPv4", 00:23:52.549 "traddr": "10.0.0.1", 00:23:52.549 "trsvcid": "40720", 00:23:52.549 "trtype": "TCP" 00:23:52.549 }, 00:23:52.549 "qid": 0, 00:23:52.549 "state": "enabled" 00:23:52.549 } 00:23:52.549 ]' 00:23:52.549 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:52.549 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:52.549 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:52.806 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:23:52.806 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:52.806 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:52.806 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:52.806 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:53.062 13:42:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:23:53.625 13:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:53.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:53.625 13:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:53.625 13:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.625 13:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.625 13:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.625 13:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:23:53.625 13:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:53.625 13:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:53.625 13:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:53.882 13:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:23:53.882 13:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:53.882 13:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:53.882 13:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:53.882 13:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:23:53.882 13:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:23:53.882 13:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.882 13:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.882 13:42:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.882 13:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:53.882 13:42:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:23:54.447 00:23:54.447 13:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:54.447 13:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:54.447 13:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:54.705 13:42:07 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.705 13:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:54.705 13:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.705 13:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.705 13:42:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.705 13:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:54.705 { 00:23:54.705 "auth": { 00:23:54.705 "dhgroup": "ffdhe2048", 00:23:54.705 "digest": "sha384", 00:23:54.705 "state": "completed" 00:23:54.705 }, 00:23:54.705 "cntlid": 57, 00:23:54.705 "listen_address": { 00:23:54.705 "adrfam": "IPv4", 00:23:54.705 "traddr": "10.0.0.2", 00:23:54.705 "trsvcid": "4420", 00:23:54.705 "trtype": "TCP" 00:23:54.705 }, 00:23:54.705 "peer_address": { 00:23:54.705 "adrfam": "IPv4", 00:23:54.705 "traddr": "10.0.0.1", 00:23:54.705 "trsvcid": "40734", 00:23:54.705 "trtype": "TCP" 00:23:54.705 }, 00:23:54.705 "qid": 0, 00:23:54.705 "state": "enabled" 00:23:54.705 } 00:23:54.705 ]' 00:23:54.705 13:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:54.705 13:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:54.705 13:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:54.705 13:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:54.705 13:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:54.705 13:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:54.705 13:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:54.705 13:42:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:54.963 13:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:23:55.895 13:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:55.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:55.895 13:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:55.895 13:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.895 13:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.895 13:42:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.895 13:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:55.895 13:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:55.895 13:42:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:56.152 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:23:56.152 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:56.152 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:56.152 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:56.152 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:23:56.152 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:23:56.152 13:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.152 13:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.152 13:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.153 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:56.153 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:23:56.410 00:23:56.410 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:56.410 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:56.410 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:56.977 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.977 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:56.977 13:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.977 13:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.977 13:42:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.977 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:56.977 { 00:23:56.977 "auth": { 00:23:56.977 "dhgroup": "ffdhe2048", 00:23:56.977 "digest": "sha384", 00:23:56.977 "state": "completed" 00:23:56.977 }, 00:23:56.977 "cntlid": 59, 00:23:56.977 "listen_address": { 00:23:56.977 "adrfam": "IPv4", 00:23:56.977 "traddr": "10.0.0.2", 00:23:56.977 "trsvcid": "4420", 00:23:56.977 "trtype": "TCP" 00:23:56.977 }, 00:23:56.977 "peer_address": { 00:23:56.977 "adrfam": "IPv4", 00:23:56.977 "traddr": "10.0.0.1", 00:23:56.977 "trsvcid": "46046", 00:23:56.977 "trtype": "TCP" 00:23:56.977 }, 00:23:56.977 "qid": 0, 00:23:56.977 "state": "enabled" 00:23:56.977 } 00:23:56.977 ]' 00:23:56.977 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:56.977 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:56.977 13:42:09 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:56.977 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:56.977 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:56.977 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:56.977 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:56.977 13:42:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:57.236 13:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:23:58.171 13:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:58.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:58.171 13:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:23:58.171 13:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.171 13:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.171 13:42:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.171 13:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:23:58.171 13:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:58.171 13:42:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:58.429 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:23:58.429 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:23:58.429 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:23:58.429 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:23:58.429 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:23:58.429 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:23:58.429 13:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.429 13:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.429 13:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.429 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:58.429 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:23:58.688 00:23:58.688 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:23:58.688 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:23:58.688 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:58.946 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.946 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:58.946 13:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.946 13:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.946 13:42:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.946 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:23:58.946 { 00:23:58.946 "auth": { 00:23:58.946 "dhgroup": "ffdhe2048", 00:23:58.946 "digest": "sha384", 00:23:58.946 "state": "completed" 00:23:58.946 }, 00:23:58.946 "cntlid": 61, 00:23:58.946 "listen_address": { 00:23:58.946 "adrfam": "IPv4", 00:23:58.946 "traddr": "10.0.0.2", 00:23:58.946 "trsvcid": "4420", 00:23:58.946 "trtype": "TCP" 00:23:58.946 }, 00:23:58.946 "peer_address": { 00:23:58.946 "adrfam": "IPv4", 00:23:58.946 "traddr": "10.0.0.1", 00:23:58.946 "trsvcid": "46066", 00:23:58.946 "trtype": "TCP" 00:23:58.946 }, 00:23:58.946 "qid": 0, 00:23:58.946 "state": "enabled" 00:23:58.946 } 00:23:58.946 ]' 00:23:58.946 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:23:58.946 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:58.946 13:42:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:23:58.946 13:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:58.946 13:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:23:59.205 13:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:59.205 13:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:59.205 13:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:59.463 13:42:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:24:00.028 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:00.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:00.028 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:00.028 13:42:13 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.028 13:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.028 13:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.028 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:00.028 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:00.028 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:00.286 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:24:00.286 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:00.286 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:00.286 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:00.286 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:00.286 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:24:00.286 13:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.577 13:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.577 13:42:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.577 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:00.577 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:00.836 00:24:00.836 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:00.836 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:00.836 13:42:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:01.094 13:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.094 13:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:01.094 13:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.094 13:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:01.094 13:42:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.094 13:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:01.094 { 00:24:01.094 "auth": { 00:24:01.094 "dhgroup": "ffdhe2048", 00:24:01.094 "digest": "sha384", 00:24:01.094 "state": "completed" 00:24:01.094 }, 00:24:01.094 "cntlid": 63, 00:24:01.094 "listen_address": { 00:24:01.094 "adrfam": "IPv4", 
00:24:01.094 "traddr": "10.0.0.2", 00:24:01.094 "trsvcid": "4420", 00:24:01.094 "trtype": "TCP" 00:24:01.094 }, 00:24:01.094 "peer_address": { 00:24:01.094 "adrfam": "IPv4", 00:24:01.094 "traddr": "10.0.0.1", 00:24:01.094 "trsvcid": "46106", 00:24:01.094 "trtype": "TCP" 00:24:01.094 }, 00:24:01.094 "qid": 0, 00:24:01.094 "state": "enabled" 00:24:01.094 } 00:24:01.094 ]' 00:24:01.094 13:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:01.094 13:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:01.094 13:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:01.094 13:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:01.094 13:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:01.352 13:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:01.352 13:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:01.352 13:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:01.610 13:42:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:02.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:02.543 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:03.109 00:24:03.109 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:03.109 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:03.109 13:42:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:03.367 13:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.367 13:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:03.367 13:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.367 13:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.367 13:42:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.367 13:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:03.367 { 00:24:03.367 "auth": { 00:24:03.367 "dhgroup": "ffdhe3072", 00:24:03.367 "digest": "sha384", 00:24:03.367 "state": "completed" 00:24:03.367 }, 00:24:03.367 "cntlid": 65, 00:24:03.367 "listen_address": { 00:24:03.367 "adrfam": "IPv4", 00:24:03.367 "traddr": "10.0.0.2", 00:24:03.367 "trsvcid": "4420", 00:24:03.367 "trtype": "TCP" 00:24:03.367 }, 00:24:03.367 "peer_address": { 00:24:03.367 "adrfam": "IPv4", 00:24:03.367 "traddr": "10.0.0.1", 00:24:03.367 "trsvcid": "46134", 00:24:03.367 "trtype": "TCP" 00:24:03.367 }, 00:24:03.367 "qid": 0, 00:24:03.367 "state": "enabled" 00:24:03.367 } 00:24:03.367 ]' 00:24:03.367 13:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:03.367 13:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:03.367 13:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:03.367 13:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:03.367 13:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:03.367 13:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:03.367 13:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:03.367 13:42:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:03.625 13:42:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:24:04.556 13:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:04.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:04.557 13:42:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:05.122 00:24:05.122 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:05.122 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:05.122 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:05.380 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:24:05.380 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:05.381 13:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.381 13:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:05.381 13:42:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.381 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:05.381 { 00:24:05.381 "auth": { 00:24:05.381 "dhgroup": "ffdhe3072", 00:24:05.381 "digest": "sha384", 00:24:05.381 "state": "completed" 00:24:05.381 }, 00:24:05.381 "cntlid": 67, 00:24:05.381 "listen_address": { 00:24:05.381 "adrfam": "IPv4", 00:24:05.381 "traddr": "10.0.0.2", 00:24:05.381 "trsvcid": "4420", 00:24:05.381 "trtype": "TCP" 00:24:05.381 }, 00:24:05.381 "peer_address": { 00:24:05.381 "adrfam": "IPv4", 00:24:05.381 "traddr": "10.0.0.1", 00:24:05.381 "trsvcid": "46152", 00:24:05.381 "trtype": "TCP" 00:24:05.381 }, 00:24:05.381 "qid": 0, 00:24:05.381 "state": "enabled" 00:24:05.381 } 00:24:05.381 ]' 00:24:05.381 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:05.381 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:05.381 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:05.381 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:05.381 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:05.639 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:05.639 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:05.639 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:05.897 13:42:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:24:06.465 13:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:06.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:06.465 13:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:06.465 13:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.465 13:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.465 13:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.465 13:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:06.465 13:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:06.465 13:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe3072 00:24:06.724 13:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:24:06.724 13:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:06.724 13:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:06.724 13:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:06.724 13:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:06.724 13:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:24:06.724 13:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.724 13:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.724 13:42:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.724 13:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:06.724 13:42:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:06.983 00:24:07.242 13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:07.242 13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:07.242 13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:07.500 13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.500 13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:07.500 13:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:07.500 13:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:07.500 13:42:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:07.500 13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:07.500 { 00:24:07.500 "auth": { 00:24:07.500 "dhgroup": "ffdhe3072", 00:24:07.500 "digest": "sha384", 00:24:07.500 "state": "completed" 00:24:07.500 }, 00:24:07.500 "cntlid": 69, 00:24:07.500 "listen_address": { 00:24:07.500 "adrfam": "IPv4", 00:24:07.500 "traddr": "10.0.0.2", 00:24:07.500 "trsvcid": "4420", 00:24:07.500 "trtype": "TCP" 00:24:07.500 }, 00:24:07.500 "peer_address": { 00:24:07.500 "adrfam": "IPv4", 00:24:07.500 "traddr": "10.0.0.1", 00:24:07.500 "trsvcid": "44550", 00:24:07.500 "trtype": "TCP" 00:24:07.500 }, 00:24:07.500 "qid": 0, 00:24:07.500 "state": "enabled" 00:24:07.500 } 00:24:07.500 ]' 00:24:07.500 13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:07.500 13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:07.500 13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:07.500 
13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:07.500 13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:07.500 13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:07.500 13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:07.500 13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:07.758 13:42:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:24:08.323 13:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:08.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:08.323 13:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:08.323 13:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.323 13:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.323 13:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.323 13:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:08.323 13:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:08.323 13:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:08.889 13:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:24:08.889 13:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:08.889 13:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:08.889 13:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:08.889 13:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:08.889 13:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:24:08.889 13:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.889 13:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.889 13:42:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.889 13:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:08.889 13:42:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:09.147 00:24:09.147 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:09.147 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:09.147 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:09.405 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.405 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:09.405 13:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.405 13:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.405 13:42:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.405 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:09.405 { 00:24:09.405 "auth": { 00:24:09.405 "dhgroup": "ffdhe3072", 00:24:09.405 "digest": "sha384", 00:24:09.405 "state": "completed" 00:24:09.405 }, 00:24:09.405 "cntlid": 71, 00:24:09.405 "listen_address": { 00:24:09.405 "adrfam": "IPv4", 00:24:09.405 "traddr": "10.0.0.2", 00:24:09.405 "trsvcid": "4420", 00:24:09.405 "trtype": "TCP" 00:24:09.405 }, 00:24:09.405 "peer_address": { 00:24:09.405 "adrfam": "IPv4", 00:24:09.405 "traddr": "10.0.0.1", 00:24:09.405 "trsvcid": "44574", 00:24:09.405 "trtype": "TCP" 00:24:09.405 }, 00:24:09.405 "qid": 0, 00:24:09.405 "state": "enabled" 00:24:09.405 } 00:24:09.405 ]' 00:24:09.405 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:09.405 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:09.405 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:09.405 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:09.405 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:09.663 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:09.663 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:09.663 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:09.922 13:42:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:24:10.495 13:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:10.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:10.495 13:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:10.495 13:42:23 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.495 13:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.495 13:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.495 13:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:24:10.495 13:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:10.495 13:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:10.495 13:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:10.754 13:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:24:10.754 13:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:10.754 13:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:10.754 13:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:10.754 13:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:10.754 13:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:24:10.754 13:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.754 13:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.754 13:42:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.754 13:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:10.754 13:42:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:11.320 00:24:11.320 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:11.320 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:11.320 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:11.579 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.579 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:11.579 13:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.579 13:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:11.579 13:42:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.579 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:11.579 { 00:24:11.579 "auth": { 00:24:11.579 "dhgroup": "ffdhe4096", 00:24:11.579 "digest": "sha384", 00:24:11.579 "state": "completed" 
00:24:11.579 }, 00:24:11.579 "cntlid": 73, 00:24:11.579 "listen_address": { 00:24:11.579 "adrfam": "IPv4", 00:24:11.579 "traddr": "10.0.0.2", 00:24:11.579 "trsvcid": "4420", 00:24:11.579 "trtype": "TCP" 00:24:11.579 }, 00:24:11.579 "peer_address": { 00:24:11.579 "adrfam": "IPv4", 00:24:11.579 "traddr": "10.0.0.1", 00:24:11.579 "trsvcid": "44598", 00:24:11.579 "trtype": "TCP" 00:24:11.579 }, 00:24:11.579 "qid": 0, 00:24:11.579 "state": "enabled" 00:24:11.579 } 00:24:11.579 ]' 00:24:11.579 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:11.579 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:11.579 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:11.579 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:11.579 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:11.579 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:11.579 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:11.579 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:11.837 13:42:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:12.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:12.769 13:42:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.029 13:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:13.029 13:42:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:13.287 00:24:13.287 13:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:13.287 13:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:13.287 13:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:13.545 13:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.545 13:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:13.545 13:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.545 13:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:13.545 13:42:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.545 13:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:13.545 { 00:24:13.545 "auth": { 00:24:13.545 "dhgroup": "ffdhe4096", 00:24:13.545 "digest": "sha384", 00:24:13.545 "state": "completed" 00:24:13.545 }, 00:24:13.545 "cntlid": 75, 00:24:13.545 "listen_address": { 00:24:13.545 "adrfam": "IPv4", 00:24:13.545 "traddr": "10.0.0.2", 00:24:13.545 "trsvcid": "4420", 00:24:13.545 "trtype": "TCP" 00:24:13.545 }, 00:24:13.545 "peer_address": { 00:24:13.545 "adrfam": "IPv4", 00:24:13.545 "traddr": "10.0.0.1", 00:24:13.545 "trsvcid": "44606", 00:24:13.545 "trtype": "TCP" 00:24:13.545 }, 00:24:13.545 "qid": 0, 00:24:13.545 "state": "enabled" 00:24:13.545 } 00:24:13.545 ]' 00:24:13.545 13:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:13.802 13:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:13.802 13:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:13.802 13:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:13.802 13:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:13.802 13:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:13.802 13:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:13.802 13:42:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:14.060 13:42:27 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:24:14.624 13:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:14.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:14.624 13:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:14.624 13:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.624 13:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.624 13:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.624 13:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:14.624 13:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:14.624 13:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:14.882 13:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:24:14.882 13:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:14.882 13:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:14.882 13:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:14.882 13:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:14.882 13:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:24:14.882 13:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.882 13:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:14.882 13:42:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.882 13:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:14.882 13:42:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:15.141 00:24:15.398 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:15.398 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:15.398 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:15.656 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 
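For readability, the cycle that target/auth.sh repeats throughout this trace can be summarized with the following shell sketch. It is reconstructed only from commands visible in the log: rpc_cmd issues target-side RPCs, hostrpc is the rpc.py wrapper bound to /var/tmp/host.sock, and the keys array holds the pre-generated DHHC-1 secrets; the fixed sha384/ffdhe4096 pair stands in for whichever digest and dhgroup the outer loops are currently testing. This is a simplified reconstruction, not the literal contents of target/auth.sh.

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd
  for keyid in "${!keys[@]}"; do
      # Host side: restrict the SPDK initiator to the digest/dhgroup under test.
      hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
      # Target side: allow this host only with the key being exercised.
      rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
      # Attach through the SPDK initiator, then verify the authenticated qpair.
      hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
          -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid"
      hostrpc bdev_nvme_get_controllers | jq -r '.[].name'                    # expect nvme0
      rpc_cmd nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # expect completed
      hostrpc bdev_nvme_detach_controller nvme0
      # Repeat the handshake through the kernel initiator using the raw DHHC-1 secret.
      nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
          --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret "${keys[$keyid]}"
      nvme disconnect -n "$subnqn"
      rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done

The jq assertions in the surrounding entries check the same fields the sketch samples: .auth.digest, .auth.dhgroup, and .auth.state of the first qpair must match the configured digest, dhgroup, and "completed" before the controller is detached.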
00:24:15.656 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:15.656 13:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.656 13:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:15.656 13:42:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:15.656 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:15.656 { 00:24:15.656 "auth": { 00:24:15.656 "dhgroup": "ffdhe4096", 00:24:15.656 "digest": "sha384", 00:24:15.656 "state": "completed" 00:24:15.656 }, 00:24:15.656 "cntlid": 77, 00:24:15.656 "listen_address": { 00:24:15.656 "adrfam": "IPv4", 00:24:15.656 "traddr": "10.0.0.2", 00:24:15.656 "trsvcid": "4420", 00:24:15.656 "trtype": "TCP" 00:24:15.656 }, 00:24:15.656 "peer_address": { 00:24:15.656 "adrfam": "IPv4", 00:24:15.656 "traddr": "10.0.0.1", 00:24:15.656 "trsvcid": "44640", 00:24:15.656 "trtype": "TCP" 00:24:15.656 }, 00:24:15.656 "qid": 0, 00:24:15.656 "state": "enabled" 00:24:15.656 } 00:24:15.656 ]' 00:24:15.656 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:15.656 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:15.656 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:15.656 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:15.656 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:15.656 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:15.656 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:15.656 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:15.914 13:42:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:16.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe4096 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:16.849 13:42:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:17.432 00:24:17.432 13:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:17.432 13:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:17.432 13:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:17.689 13:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.689 13:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:17.689 13:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.689 13:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:17.689 13:42:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.689 13:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:17.689 { 00:24:17.689 "auth": { 00:24:17.689 "dhgroup": "ffdhe4096", 00:24:17.689 "digest": "sha384", 00:24:17.689 "state": "completed" 00:24:17.689 }, 00:24:17.689 "cntlid": 79, 00:24:17.689 "listen_address": { 00:24:17.689 "adrfam": "IPv4", 00:24:17.689 "traddr": "10.0.0.2", 00:24:17.689 "trsvcid": "4420", 00:24:17.689 "trtype": "TCP" 00:24:17.689 }, 00:24:17.689 "peer_address": { 00:24:17.689 "adrfam": "IPv4", 00:24:17.689 "traddr": "10.0.0.1", 00:24:17.689 "trsvcid": "59070", 00:24:17.689 "trtype": "TCP" 00:24:17.689 }, 00:24:17.689 "qid": 0, 00:24:17.689 "state": "enabled" 00:24:17.689 } 00:24:17.689 ]' 00:24:17.689 13:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:17.689 13:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:17.689 13:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:17.689 
13:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:24:17.689 13:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:17.947 13:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:17.947 13:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:17.947 13:42:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:18.205 13:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:24:19.140 13:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:19.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:19.140 13:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:19.140 13:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.140 13:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.140 13:42:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.140 13:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.140 13:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:19.140 13:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:19.140 13:42:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:19.398 13:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:24:19.398 13:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:19.398 13:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:19.398 13:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:19.398 13:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:19.398 13:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:24:19.398 13:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:19.398 13:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:19.398 13:42:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:19.398 13:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:19.398 13:42:32 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:19.964 00:24:19.964 13:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:19.964 13:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:19.964 13:42:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:20.223 13:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.223 13:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:20.223 13:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.223 13:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:20.223 13:42:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.223 13:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:20.223 { 00:24:20.223 "auth": { 00:24:20.223 "dhgroup": "ffdhe6144", 00:24:20.223 "digest": "sha384", 00:24:20.223 "state": "completed" 00:24:20.223 }, 00:24:20.223 "cntlid": 81, 00:24:20.223 "listen_address": { 00:24:20.223 "adrfam": "IPv4", 00:24:20.223 "traddr": "10.0.0.2", 00:24:20.223 "trsvcid": "4420", 00:24:20.223 "trtype": "TCP" 00:24:20.223 }, 00:24:20.223 "peer_address": { 00:24:20.223 "adrfam": "IPv4", 00:24:20.223 "traddr": "10.0.0.1", 00:24:20.223 "trsvcid": "59098", 00:24:20.223 "trtype": "TCP" 00:24:20.223 }, 00:24:20.223 "qid": 0, 00:24:20.223 "state": "enabled" 00:24:20.223 } 00:24:20.223 ]' 00:24:20.223 13:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:20.223 13:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:20.223 13:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:20.223 13:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:20.223 13:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:20.223 13:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:20.223 13:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:20.223 13:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:20.482 13:42:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:24:21.414 13:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:21.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:21.414 13:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:21.414 13:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.414 13:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.414 13:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.414 13:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:21.414 13:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:21.414 13:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:21.671 13:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:24:21.671 13:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:21.671 13:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:21.671 13:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:21.671 13:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:21.671 13:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:24:21.671 13:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.671 13:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:21.671 13:42:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.671 13:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:21.671 13:42:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:21.930 00:24:21.930 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:21.930 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:21.930 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:22.495 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.495 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:22.495 13:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.495 13:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:22.495 13:42:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.495 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:22.495 { 00:24:22.495 "auth": { 00:24:22.495 "dhgroup": "ffdhe6144", 00:24:22.495 "digest": "sha384", 00:24:22.495 "state": 
"completed" 00:24:22.495 }, 00:24:22.495 "cntlid": 83, 00:24:22.495 "listen_address": { 00:24:22.495 "adrfam": "IPv4", 00:24:22.495 "traddr": "10.0.0.2", 00:24:22.495 "trsvcid": "4420", 00:24:22.495 "trtype": "TCP" 00:24:22.495 }, 00:24:22.495 "peer_address": { 00:24:22.495 "adrfam": "IPv4", 00:24:22.495 "traddr": "10.0.0.1", 00:24:22.495 "trsvcid": "59134", 00:24:22.495 "trtype": "TCP" 00:24:22.495 }, 00:24:22.495 "qid": 0, 00:24:22.495 "state": "enabled" 00:24:22.495 } 00:24:22.495 ]' 00:24:22.495 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:22.495 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:22.495 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:22.495 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:22.495 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:22.495 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:22.495 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:22.495 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:22.753 13:42:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:24:23.687 13:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:23.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:23.687 13:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:23.687 13:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.687 13:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.687 13:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.687 13:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:23.687 13:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:23.687 13:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:23.945 13:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:24:23.945 13:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:23.945 13:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:23.945 13:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:23.945 13:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:23.945 13:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:24:23.945 13:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.945 13:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:23.945 13:42:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.945 13:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:23.945 13:42:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:24.512 00:24:24.512 13:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:24.512 13:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:24.512 13:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:24.512 13:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.512 13:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:24.512 13:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.512 13:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:24.769 13:42:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.769 13:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:24.769 { 00:24:24.769 "auth": { 00:24:24.769 "dhgroup": "ffdhe6144", 00:24:24.769 "digest": "sha384", 00:24:24.769 "state": "completed" 00:24:24.769 }, 00:24:24.769 "cntlid": 85, 00:24:24.769 "listen_address": { 00:24:24.769 "adrfam": "IPv4", 00:24:24.769 "traddr": "10.0.0.2", 00:24:24.769 "trsvcid": "4420", 00:24:24.769 "trtype": "TCP" 00:24:24.769 }, 00:24:24.769 "peer_address": { 00:24:24.769 "adrfam": "IPv4", 00:24:24.769 "traddr": "10.0.0.1", 00:24:24.769 "trsvcid": "59154", 00:24:24.769 "trtype": "TCP" 00:24:24.769 }, 00:24:24.769 "qid": 0, 00:24:24.769 "state": "enabled" 00:24:24.769 } 00:24:24.769 ]' 00:24:24.769 13:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:24.769 13:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:24.769 13:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:24.769 13:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:24.769 13:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:24.769 13:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:24.769 13:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:24.769 13:42:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:25.027 13:42:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:24:25.961 13:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:25.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:25.961 13:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:25.961 13:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.961 13:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.961 13:42:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.961 13:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:25.961 13:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:25.961 13:42:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:26.219 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:24:26.219 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:26.219 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:26.219 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:24:26.219 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:26.219 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:24:26.219 13:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.219 13:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:26.219 13:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.219 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:26.219 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:26.850 00:24:26.850 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:26.850 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:26.850 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:27.114 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:24:27.114 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:27.114 13:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.114 13:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:27.114 13:42:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.114 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:27.114 { 00:24:27.114 "auth": { 00:24:27.114 "dhgroup": "ffdhe6144", 00:24:27.114 "digest": "sha384", 00:24:27.114 "state": "completed" 00:24:27.114 }, 00:24:27.114 "cntlid": 87, 00:24:27.114 "listen_address": { 00:24:27.114 "adrfam": "IPv4", 00:24:27.114 "traddr": "10.0.0.2", 00:24:27.114 "trsvcid": "4420", 00:24:27.114 "trtype": "TCP" 00:24:27.114 }, 00:24:27.114 "peer_address": { 00:24:27.114 "adrfam": "IPv4", 00:24:27.114 "traddr": "10.0.0.1", 00:24:27.114 "trsvcid": "36286", 00:24:27.114 "trtype": "TCP" 00:24:27.114 }, 00:24:27.114 "qid": 0, 00:24:27.114 "state": "enabled" 00:24:27.114 } 00:24:27.114 ]' 00:24:27.114 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:27.114 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:27.114 13:42:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:27.114 13:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:24:27.114 13:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:27.114 13:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:27.114 13:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:27.114 13:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:27.372 13:42:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:24:27.938 13:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:27.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:27.938 13:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:27.938 13:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.938 13:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.195 13:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.195 13:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:24:28.195 13:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:28.195 13:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:28.195 13:42:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:28.453 13:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:24:28.453 13:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:28.453 13:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:28.453 13:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:28.453 13:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:28.453 13:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:24:28.453 13:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:28.453 13:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:28.453 13:42:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:28.453 13:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:28.453 13:42:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:29.017 00:24:29.017 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:29.017 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:29.017 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:29.275 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.275 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:29.275 13:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:29.275 13:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:29.275 13:42:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:29.275 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:29.275 { 00:24:29.275 "auth": { 00:24:29.275 "dhgroup": "ffdhe8192", 00:24:29.275 "digest": "sha384", 00:24:29.275 "state": "completed" 00:24:29.275 }, 00:24:29.275 "cntlid": 89, 00:24:29.275 "listen_address": { 00:24:29.275 "adrfam": "IPv4", 00:24:29.275 "traddr": "10.0.0.2", 00:24:29.275 "trsvcid": "4420", 00:24:29.275 "trtype": "TCP" 00:24:29.275 }, 00:24:29.275 "peer_address": { 00:24:29.275 "adrfam": "IPv4", 00:24:29.275 "traddr": "10.0.0.1", 00:24:29.275 "trsvcid": "36316", 00:24:29.275 "trtype": "TCP" 00:24:29.275 }, 00:24:29.275 "qid": 0, 00:24:29.275 "state": "enabled" 00:24:29.275 } 00:24:29.275 ]' 00:24:29.275 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:29.275 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ 
sha384 == \s\h\a\3\8\4 ]] 00:24:29.275 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:29.531 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:29.531 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:29.531 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:29.531 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:29.531 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:29.789 13:42:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:30.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:30.724 13:42:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:30.724 13:42:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:31.290 00:24:31.290 13:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:31.290 13:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:31.290 13:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:31.855 13:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.855 13:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:31.855 13:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.855 13:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:31.855 13:42:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.855 13:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:31.855 { 00:24:31.855 "auth": { 00:24:31.855 "dhgroup": "ffdhe8192", 00:24:31.855 "digest": "sha384", 00:24:31.855 "state": "completed" 00:24:31.855 }, 00:24:31.855 "cntlid": 91, 00:24:31.855 "listen_address": { 00:24:31.855 "adrfam": "IPv4", 00:24:31.855 "traddr": "10.0.0.2", 00:24:31.855 "trsvcid": "4420", 00:24:31.855 "trtype": "TCP" 00:24:31.855 }, 00:24:31.855 "peer_address": { 00:24:31.855 "adrfam": "IPv4", 00:24:31.855 "traddr": "10.0.0.1", 00:24:31.855 "trsvcid": "36336", 00:24:31.855 "trtype": "TCP" 00:24:31.855 }, 00:24:31.855 "qid": 0, 00:24:31.855 "state": "enabled" 00:24:31.855 } 00:24:31.855 ]' 00:24:31.855 13:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:31.855 13:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:31.855 13:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:31.855 13:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:31.855 13:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:31.855 13:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:31.855 13:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:31.855 13:42:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:32.113 13:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:24:33.102 13:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:33.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:33.102 13:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:33.102 13:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.102 13:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.102 13:42:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.102 13:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:33.102 13:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:33.102 13:42:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:33.102 13:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:24:33.102 13:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:33.102 13:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:33.102 13:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:33.102 13:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:33.102 13:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:24:33.102 13:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.102 13:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:33.102 13:42:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.102 13:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:33.102 13:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:34.035 00:24:34.035 13:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:34.035 13:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:34.035 13:42:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:34.293 13:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.293 13:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:34.293 13:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.293 13:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:34.293 13:42:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.293 13:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:34.293 { 00:24:34.293 "auth": { 00:24:34.293 "dhgroup": "ffdhe8192", 00:24:34.293 "digest": "sha384", 00:24:34.293 "state": 
"completed" 00:24:34.293 }, 00:24:34.293 "cntlid": 93, 00:24:34.293 "listen_address": { 00:24:34.294 "adrfam": "IPv4", 00:24:34.294 "traddr": "10.0.0.2", 00:24:34.294 "trsvcid": "4420", 00:24:34.294 "trtype": "TCP" 00:24:34.294 }, 00:24:34.294 "peer_address": { 00:24:34.294 "adrfam": "IPv4", 00:24:34.294 "traddr": "10.0.0.1", 00:24:34.294 "trsvcid": "36372", 00:24:34.294 "trtype": "TCP" 00:24:34.294 }, 00:24:34.294 "qid": 0, 00:24:34.294 "state": "enabled" 00:24:34.294 } 00:24:34.294 ]' 00:24:34.294 13:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:34.294 13:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:34.294 13:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:34.294 13:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:34.294 13:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:34.552 13:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:34.552 13:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:34.552 13:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:34.810 13:42:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:24:35.376 13:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:35.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:35.376 13:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:35.376 13:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.377 13:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.377 13:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.377 13:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:35.377 13:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:35.377 13:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:35.635 13:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:24:35.635 13:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:35.635 13:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:24:35.635 13:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:24:35.635 13:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:35.635 13:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:24:35.635 13:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.635 13:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:35.635 13:42:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.635 13:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:35.635 13:42:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:36.569 00:24:36.569 13:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:36.569 13:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:36.569 13:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:36.828 13:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.828 13:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:36.828 13:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.828 13:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:36.828 13:42:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.828 13:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:36.828 { 00:24:36.828 "auth": { 00:24:36.828 "dhgroup": "ffdhe8192", 00:24:36.828 "digest": "sha384", 00:24:36.828 "state": "completed" 00:24:36.828 }, 00:24:36.828 "cntlid": 95, 00:24:36.828 "listen_address": { 00:24:36.828 "adrfam": "IPv4", 00:24:36.828 "traddr": "10.0.0.2", 00:24:36.828 "trsvcid": "4420", 00:24:36.828 "trtype": "TCP" 00:24:36.828 }, 00:24:36.828 "peer_address": { 00:24:36.828 "adrfam": "IPv4", 00:24:36.828 "traddr": "10.0.0.1", 00:24:36.828 "trsvcid": "45104", 00:24:36.828 "trtype": "TCP" 00:24:36.828 }, 00:24:36.828 "qid": 0, 00:24:36.828 "state": "enabled" 00:24:36.828 } 00:24:36.828 ]' 00:24:36.828 13:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:36.828 13:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:24:36.828 13:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:36.828 13:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:24:36.828 13:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:36.828 13:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:36.828 13:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:36.828 13:42:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:37.086 13:42:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:24:38.023 13:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:38.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:38.023 13:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:38.023 13:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.023 13:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.023 13:42:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.023 13:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:24:38.023 13:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.023 13:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:38.023 13:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:38.023 13:42:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:38.023 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:24:38.023 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:38.023 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:38.023 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:38.023 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:38.023 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:24:38.023 13:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.023 13:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.023 13:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.023 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:38.024 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:38.281 00:24:38.538 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:38.538 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:24:38.538 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:38.797 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.797 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:38.797 13:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:38.797 13:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:38.797 13:42:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:38.797 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:38.797 { 00:24:38.797 "auth": { 00:24:38.797 "dhgroup": "null", 00:24:38.797 "digest": "sha512", 00:24:38.797 "state": "completed" 00:24:38.797 }, 00:24:38.797 "cntlid": 97, 00:24:38.797 "listen_address": { 00:24:38.797 "adrfam": "IPv4", 00:24:38.797 "traddr": "10.0.0.2", 00:24:38.797 "trsvcid": "4420", 00:24:38.797 "trtype": "TCP" 00:24:38.797 }, 00:24:38.797 "peer_address": { 00:24:38.797 "adrfam": "IPv4", 00:24:38.797 "traddr": "10.0.0.1", 00:24:38.797 "trsvcid": "45146", 00:24:38.797 "trtype": "TCP" 00:24:38.797 }, 00:24:38.797 "qid": 0, 00:24:38.797 "state": "enabled" 00:24:38.797 } 00:24:38.797 ]' 00:24:38.797 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:38.797 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:38.797 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:38.797 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:24:38.797 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:38.797 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:38.797 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:38.797 13:42:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:39.055 13:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:24:39.988 13:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:39.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:39.988 13:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:39.988 13:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.988 13:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:39.988 13:42:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.988 13:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:39.988 13:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups null 00:24:39.988 13:42:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:40.246 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:24:40.246 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:40.246 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:40.246 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:40.246 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:40.246 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:24:40.246 13:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.246 13:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.246 13:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.246 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:40.246 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:40.504 00:24:40.504 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:40.504 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:40.504 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:40.762 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.762 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:40.762 13:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.762 13:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:40.762 13:42:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.762 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:40.762 { 00:24:40.762 "auth": { 00:24:40.762 "dhgroup": "null", 00:24:40.762 "digest": "sha512", 00:24:40.762 "state": "completed" 00:24:40.762 }, 00:24:40.762 "cntlid": 99, 00:24:40.762 "listen_address": { 00:24:40.762 "adrfam": "IPv4", 00:24:40.762 "traddr": "10.0.0.2", 00:24:40.762 "trsvcid": "4420", 00:24:40.762 "trtype": "TCP" 00:24:40.762 }, 00:24:40.762 "peer_address": { 00:24:40.762 "adrfam": "IPv4", 00:24:40.762 "traddr": "10.0.0.1", 00:24:40.762 "trsvcid": "45186", 00:24:40.762 "trtype": "TCP" 00:24:40.762 }, 00:24:40.762 "qid": 0, 00:24:40.762 "state": "enabled" 00:24:40.762 } 00:24:40.762 ]' 00:24:40.762 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:40.762 13:42:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:40.762 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:40.762 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:24:40.762 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:40.762 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:40.762 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:40.762 13:42:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:41.327 13:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:24:41.891 13:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:41.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:41.892 13:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:41.892 13:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.892 13:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:41.892 13:42:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.892 13:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:41.892 13:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:41.892 13:42:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:42.149 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:24:42.149 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:42.149 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:42.149 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:42.149 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:42.149 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:24:42.149 13:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.149 13:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.149 13:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.149 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:42.149 13:42:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:42.407 00:24:42.665 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:42.665 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:42.665 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:42.922 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.922 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:42.922 13:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.922 13:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:42.922 13:42:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.922 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:42.922 { 00:24:42.922 "auth": { 00:24:42.922 "dhgroup": "null", 00:24:42.922 "digest": "sha512", 00:24:42.922 "state": "completed" 00:24:42.922 }, 00:24:42.922 "cntlid": 101, 00:24:42.922 "listen_address": { 00:24:42.922 "adrfam": "IPv4", 00:24:42.922 "traddr": "10.0.0.2", 00:24:42.922 "trsvcid": "4420", 00:24:42.922 "trtype": "TCP" 00:24:42.922 }, 00:24:42.922 "peer_address": { 00:24:42.922 "adrfam": "IPv4", 00:24:42.922 "traddr": "10.0.0.1", 00:24:42.922 "trsvcid": "45198", 00:24:42.922 "trtype": "TCP" 00:24:42.922 }, 00:24:42.922 "qid": 0, 00:24:42.922 "state": "enabled" 00:24:42.922 } 00:24:42.922 ]' 00:24:42.922 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:42.922 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:42.922 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:42.922 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:24:42.922 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:42.922 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:42.922 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:42.922 13:42:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:43.181 13:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:24:44.115 13:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:44.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:44.115 13:42:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:44.115 13:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.115 13:42:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.115 13:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.115 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:44.115 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:44.115 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:24:44.373 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:24:44.373 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:44.373 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:44.373 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:24:44.373 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:44.373 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:24:44.373 13:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.373 13:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:44.373 13:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.373 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:44.373 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:44.631 00:24:44.888 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:44.888 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:44.888 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:45.147 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.147 13:42:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:45.147 13:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.147 13:42:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:45.147 13:42:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.147 13:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:45.147 { 00:24:45.147 "auth": { 00:24:45.147 "dhgroup": "null", 00:24:45.147 "digest": "sha512", 00:24:45.147 "state": "completed" 00:24:45.147 }, 
00:24:45.147 "cntlid": 103, 00:24:45.147 "listen_address": { 00:24:45.147 "adrfam": "IPv4", 00:24:45.147 "traddr": "10.0.0.2", 00:24:45.147 "trsvcid": "4420", 00:24:45.147 "trtype": "TCP" 00:24:45.147 }, 00:24:45.147 "peer_address": { 00:24:45.147 "adrfam": "IPv4", 00:24:45.147 "traddr": "10.0.0.1", 00:24:45.147 "trsvcid": "45220", 00:24:45.147 "trtype": "TCP" 00:24:45.147 }, 00:24:45.147 "qid": 0, 00:24:45.147 "state": "enabled" 00:24:45.147 } 00:24:45.147 ]' 00:24:45.147 13:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:45.147 13:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:45.147 13:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:45.147 13:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:24:45.147 13:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:45.147 13:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:45.147 13:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:45.147 13:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:45.405 13:42:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:46.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:46.339 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:46.903 00:24:46.903 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:46.903 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:46.903 13:42:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:47.161 13:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.161 13:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:47.161 13:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.161 13:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:47.161 13:43:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.161 13:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:47.161 { 00:24:47.161 "auth": { 00:24:47.161 "dhgroup": "ffdhe2048", 00:24:47.161 "digest": "sha512", 00:24:47.161 "state": "completed" 00:24:47.161 }, 00:24:47.161 "cntlid": 105, 00:24:47.161 "listen_address": { 00:24:47.161 "adrfam": "IPv4", 00:24:47.161 "traddr": "10.0.0.2", 00:24:47.161 "trsvcid": "4420", 00:24:47.161 "trtype": "TCP" 00:24:47.161 }, 00:24:47.161 "peer_address": { 00:24:47.161 "adrfam": "IPv4", 00:24:47.161 "traddr": "10.0.0.1", 00:24:47.161 "trsvcid": "46968", 00:24:47.161 "trtype": "TCP" 00:24:47.161 }, 00:24:47.161 "qid": 0, 00:24:47.161 "state": "enabled" 00:24:47.161 } 00:24:47.161 ]' 00:24:47.161 13:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:47.161 13:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:47.161 13:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:47.161 13:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:47.161 13:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:47.161 13:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:47.161 13:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:47.161 13:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:47.419 13:43:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:48.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:48.352 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:48.917 00:24:48.917 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:48.917 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:48.917 13:43:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
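The surrounding entries repeat a single connect_authenticate pass: pick a digest/dhgroup pair on the host RPC socket, allow the host NQN on the subsystem with one of key0-key3, attach a controller, check the negotiated auth fields on the qpair, then detach and remove the host before the next combination. A minimal sketch of that pass, assuming the same paths, NQNs, key names and RPC calls shown in this log (the shell variable names below are shorthand introduced here, not part of the test script):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd

  # Host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup combination.
  $rpc -s $hostsock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # Target side (default RPC socket): allow the host with the key under test.
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0

  # Attach through the host socket, then read back the qpair's negotiated auth state.
  $rpc -s $hostsock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $hostnqn -n $subnqn --dhchap-key key0
  $rpc nvmf_subsystem_get_qpairs $subnqn | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

  # Tear down before the next digest/dhgroup/key combination.
  $rpc -s $hostsock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host $subnqn $hostnqn

In the log the same teardown/setup is then repeated with nvme connect using a --dhchap-secret value (DHHC-1:xx:...) instead of a named key, which exercises the kernel host path against the same subsystem.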
00:24:49.176 13:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.176 13:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:49.176 13:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.176 13:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:49.176 13:43:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:49.176 13:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:49.176 { 00:24:49.176 "auth": { 00:24:49.176 "dhgroup": "ffdhe2048", 00:24:49.176 "digest": "sha512", 00:24:49.176 "state": "completed" 00:24:49.176 }, 00:24:49.176 "cntlid": 107, 00:24:49.176 "listen_address": { 00:24:49.176 "adrfam": "IPv4", 00:24:49.176 "traddr": "10.0.0.2", 00:24:49.176 "trsvcid": "4420", 00:24:49.176 "trtype": "TCP" 00:24:49.176 }, 00:24:49.176 "peer_address": { 00:24:49.176 "adrfam": "IPv4", 00:24:49.176 "traddr": "10.0.0.1", 00:24:49.176 "trsvcid": "46996", 00:24:49.176 "trtype": "TCP" 00:24:49.176 }, 00:24:49.176 "qid": 0, 00:24:49.176 "state": "enabled" 00:24:49.176 } 00:24:49.176 ]' 00:24:49.176 13:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:49.176 13:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:49.176 13:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:49.176 13:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:49.176 13:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:49.176 13:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:49.176 13:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:49.176 13:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:49.434 13:43:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:24:50.369 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:50.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:50.369 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:50.369 13:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.369 13:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.369 13:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.369 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:50.369 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:50.369 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:50.627 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:24:50.627 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:50.627 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:50.627 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:50.627 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:50.627 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:24:50.627 13:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.627 13:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:50.627 13:43:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.627 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:50.627 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:50.983 00:24:50.983 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:50.983 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:50.983 13:43:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:51.242 13:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.242 13:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:51.242 13:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.242 13:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:51.242 13:43:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.242 13:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:51.242 { 00:24:51.242 "auth": { 00:24:51.242 "dhgroup": "ffdhe2048", 00:24:51.242 "digest": "sha512", 00:24:51.242 "state": "completed" 00:24:51.242 }, 00:24:51.242 "cntlid": 109, 00:24:51.242 "listen_address": { 00:24:51.242 "adrfam": "IPv4", 00:24:51.242 "traddr": "10.0.0.2", 00:24:51.242 "trsvcid": "4420", 00:24:51.242 "trtype": "TCP" 00:24:51.242 }, 00:24:51.242 "peer_address": { 00:24:51.242 "adrfam": "IPv4", 00:24:51.242 "traddr": "10.0.0.1", 00:24:51.242 "trsvcid": "47020", 00:24:51.242 "trtype": "TCP" 00:24:51.242 }, 00:24:51.242 "qid": 0, 00:24:51.242 "state": "enabled" 00:24:51.242 } 00:24:51.242 ]' 00:24:51.242 13:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:51.242 13:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:51.242 13:43:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:51.242 13:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:51.242 13:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:51.242 13:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:51.242 13:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:51.242 13:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:51.501 13:43:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:24:52.068 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:52.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:52.068 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:52.068 13:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.068 13:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.068 13:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.068 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:52.068 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:52.068 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:52.327 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:24:52.327 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:52.327 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:52.327 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:24:52.327 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:24:52.327 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:24:52.327 13:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.327 13:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:52.327 13:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.327 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:52.327 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:24:52.586 00:24:52.844 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:52.844 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:52.844 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:53.103 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.103 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:53.103 13:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.103 13:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:53.103 13:43:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.103 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:53.103 { 00:24:53.103 "auth": { 00:24:53.103 "dhgroup": "ffdhe2048", 00:24:53.103 "digest": "sha512", 00:24:53.103 "state": "completed" 00:24:53.103 }, 00:24:53.103 "cntlid": 111, 00:24:53.103 "listen_address": { 00:24:53.103 "adrfam": "IPv4", 00:24:53.103 "traddr": "10.0.0.2", 00:24:53.103 "trsvcid": "4420", 00:24:53.103 "trtype": "TCP" 00:24:53.103 }, 00:24:53.103 "peer_address": { 00:24:53.103 "adrfam": "IPv4", 00:24:53.103 "traddr": "10.0.0.1", 00:24:53.103 "trsvcid": "47052", 00:24:53.103 "trtype": "TCP" 00:24:53.103 }, 00:24:53.103 "qid": 0, 00:24:53.103 "state": "enabled" 00:24:53.103 } 00:24:53.103 ]' 00:24:53.103 13:43:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:53.103 13:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:53.103 13:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:53.103 13:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:24:53.103 13:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:53.103 13:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:53.103 13:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:53.103 13:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:53.361 13:43:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:24:54.296 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:54.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:54.296 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:54.296 13:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.296 13:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:54.296 13:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.296 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:24:54.297 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:54.297 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:54.297 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:54.555 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:24:54.555 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:54.555 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:54.555 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:54.555 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:24:54.555 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:24:54.555 13:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.555 13:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:54.555 13:43:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.555 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:54.555 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:24:54.814 00:24:54.814 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:54.814 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:54.814 13:43:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:55.072 13:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:55.072 13:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:55.072 13:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.072 13:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:55.072 13:43:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.072 13:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:55.072 { 
00:24:55.072 "auth": { 00:24:55.072 "dhgroup": "ffdhe3072", 00:24:55.072 "digest": "sha512", 00:24:55.072 "state": "completed" 00:24:55.072 }, 00:24:55.072 "cntlid": 113, 00:24:55.072 "listen_address": { 00:24:55.072 "adrfam": "IPv4", 00:24:55.072 "traddr": "10.0.0.2", 00:24:55.072 "trsvcid": "4420", 00:24:55.072 "trtype": "TCP" 00:24:55.072 }, 00:24:55.072 "peer_address": { 00:24:55.072 "adrfam": "IPv4", 00:24:55.072 "traddr": "10.0.0.1", 00:24:55.072 "trsvcid": "47080", 00:24:55.072 "trtype": "TCP" 00:24:55.072 }, 00:24:55.072 "qid": 0, 00:24:55.072 "state": "enabled" 00:24:55.072 } 00:24:55.072 ]' 00:24:55.072 13:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:55.335 13:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:55.335 13:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:55.335 13:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:55.335 13:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:55.335 13:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:55.335 13:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:55.335 13:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:55.594 13:43:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:56.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:24:56.529 13:43:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:56.529 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:24:57.095 00:24:57.095 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:57.095 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:57.095 13:43:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:57.353 13:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.353 13:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:57.353 13:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.353 13:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:57.353 13:43:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.353 13:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:57.353 { 00:24:57.353 "auth": { 00:24:57.353 "dhgroup": "ffdhe3072", 00:24:57.353 "digest": "sha512", 00:24:57.353 "state": "completed" 00:24:57.353 }, 00:24:57.353 "cntlid": 115, 00:24:57.353 "listen_address": { 00:24:57.353 "adrfam": "IPv4", 00:24:57.353 "traddr": "10.0.0.2", 00:24:57.353 "trsvcid": "4420", 00:24:57.353 "trtype": "TCP" 00:24:57.353 }, 00:24:57.353 "peer_address": { 00:24:57.353 "adrfam": "IPv4", 00:24:57.353 "traddr": "10.0.0.1", 00:24:57.353 "trsvcid": "50940", 00:24:57.353 "trtype": "TCP" 00:24:57.353 }, 00:24:57.353 "qid": 0, 00:24:57.353 "state": "enabled" 00:24:57.353 } 00:24:57.353 ]' 00:24:57.353 13:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:57.353 13:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:57.353 13:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:57.353 13:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:57.353 13:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:57.353 13:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:57.353 13:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:57.353 13:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:57.611 13:43:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:24:58.545 13:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:58.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:58.545 13:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:24:58.545 13:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.545 13:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.545 13:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.545 13:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:24:58.545 13:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:58.545 13:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:58.804 13:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:24:58.804 13:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:24:58.804 13:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:24:58.804 13:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:24:58.804 13:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:24:58.804 13:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:24:58.804 13:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.804 13:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.804 13:43:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.804 13:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:58.804 13:43:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:24:59.062 00:24:59.062 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:24:59.062 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:24:59.062 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:24:59.320 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.320 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:24:59.320 13:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.320 13:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:59.320 13:43:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.320 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:24:59.320 { 00:24:59.320 "auth": { 00:24:59.320 "dhgroup": "ffdhe3072", 00:24:59.320 "digest": "sha512", 00:24:59.320 "state": "completed" 00:24:59.320 }, 00:24:59.320 "cntlid": 117, 00:24:59.320 "listen_address": { 00:24:59.320 "adrfam": "IPv4", 00:24:59.320 "traddr": "10.0.0.2", 00:24:59.320 "trsvcid": "4420", 00:24:59.320 "trtype": "TCP" 00:24:59.320 }, 00:24:59.320 "peer_address": { 00:24:59.320 "adrfam": "IPv4", 00:24:59.320 "traddr": "10.0.0.1", 00:24:59.320 "trsvcid": "50952", 00:24:59.320 "trtype": "TCP" 00:24:59.320 }, 00:24:59.320 "qid": 0, 00:24:59.320 "state": "enabled" 00:24:59.320 } 00:24:59.320 ]' 00:24:59.320 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:24:59.320 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:24:59.320 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:24:59.579 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:24:59.579 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:24:59.579 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:24:59.579 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:59.579 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:59.836 13:43:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:25:00.401 13:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:00.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:00.401 13:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:00.401 13:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.401 13:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.401 13:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.401 13:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:00.401 13:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:00.401 13:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:00.658 13:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:25:00.658 13:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:00.658 13:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:00.658 13:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:25:00.658 13:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:00.658 13:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:25:00.658 13:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.658 13:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:00.658 13:43:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.658 13:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:00.658 13:43:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:01.222 00:25:01.222 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:01.222 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:01.222 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:01.480 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.480 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:01.480 13:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.480 13:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:01.480 13:43:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.480 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:01.480 { 00:25:01.480 "auth": { 00:25:01.480 "dhgroup": "ffdhe3072", 00:25:01.480 "digest": "sha512", 00:25:01.480 "state": "completed" 00:25:01.480 }, 00:25:01.480 "cntlid": 119, 00:25:01.480 "listen_address": { 00:25:01.480 "adrfam": "IPv4", 00:25:01.480 "traddr": "10.0.0.2", 00:25:01.480 "trsvcid": "4420", 00:25:01.480 "trtype": "TCP" 00:25:01.480 }, 00:25:01.480 "peer_address": { 00:25:01.480 "adrfam": "IPv4", 00:25:01.480 "traddr": "10.0.0.1", 00:25:01.480 "trsvcid": "50968", 00:25:01.480 "trtype": "TCP" 00:25:01.480 }, 00:25:01.480 "qid": 0, 00:25:01.480 "state": "enabled" 00:25:01.480 } 00:25:01.480 ]' 00:25:01.480 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:01.480 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
00:25:01.480 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:01.480 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:25:01.480 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:01.480 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:01.480 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:01.480 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:02.046 13:43:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:25:02.611 13:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:02.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:02.611 13:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:02.611 13:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.611 13:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:02.611 13:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.611 13:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:25:02.611 13:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:02.611 13:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:02.611 13:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:02.870 13:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:25:02.870 13:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:02.870 13:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:02.870 13:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:02.870 13:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:02.870 13:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:25:02.870 13:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.870 13:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:02.870 13:43:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.870 13:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:02.870 13:43:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:03.128 00:25:03.128 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:03.128 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:03.128 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:03.386 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.386 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:03.386 13:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.386 13:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:03.386 13:43:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.386 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:03.386 { 00:25:03.386 "auth": { 00:25:03.386 "dhgroup": "ffdhe4096", 00:25:03.386 "digest": "sha512", 00:25:03.386 "state": "completed" 00:25:03.386 }, 00:25:03.386 "cntlid": 121, 00:25:03.386 "listen_address": { 00:25:03.386 "adrfam": "IPv4", 00:25:03.386 "traddr": "10.0.0.2", 00:25:03.386 "trsvcid": "4420", 00:25:03.386 "trtype": "TCP" 00:25:03.386 }, 00:25:03.386 "peer_address": { 00:25:03.386 "adrfam": "IPv4", 00:25:03.386 "traddr": "10.0.0.1", 00:25:03.386 "trsvcid": "50998", 00:25:03.386 "trtype": "TCP" 00:25:03.386 }, 00:25:03.386 "qid": 0, 00:25:03.386 "state": "enabled" 00:25:03.386 } 00:25:03.386 ]' 00:25:03.386 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:03.644 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:03.644 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:03.644 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:03.644 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:03.644 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:03.644 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:03.644 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:03.902 13:43:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:25:04.470 13:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:04.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:04.470 13:43:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:04.470 13:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.470 13:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:04.470 13:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.470 13:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:04.470 13:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:04.470 13:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:04.728 13:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:25:04.728 13:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:04.728 13:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:04.728 13:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:04.728 13:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:04.728 13:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:25:04.728 13:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.728 13:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:04.728 13:43:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.728 13:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:04.728 13:43:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:04.986 00:25:05.244 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:05.244 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:05.244 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:05.503 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.503 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:05.503 13:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.503 13:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:05.503 13:43:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.503 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:05.503 { 
00:25:05.503 "auth": { 00:25:05.503 "dhgroup": "ffdhe4096", 00:25:05.503 "digest": "sha512", 00:25:05.503 "state": "completed" 00:25:05.503 }, 00:25:05.503 "cntlid": 123, 00:25:05.503 "listen_address": { 00:25:05.503 "adrfam": "IPv4", 00:25:05.503 "traddr": "10.0.0.2", 00:25:05.503 "trsvcid": "4420", 00:25:05.503 "trtype": "TCP" 00:25:05.503 }, 00:25:05.503 "peer_address": { 00:25:05.503 "adrfam": "IPv4", 00:25:05.503 "traddr": "10.0.0.1", 00:25:05.503 "trsvcid": "51032", 00:25:05.503 "trtype": "TCP" 00:25:05.503 }, 00:25:05.503 "qid": 0, 00:25:05.503 "state": "enabled" 00:25:05.503 } 00:25:05.503 ]' 00:25:05.503 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:05.503 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:05.503 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:05.503 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:05.503 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:05.503 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:05.503 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:05.503 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:05.761 13:43:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:25:06.695 13:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:06.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:06.695 13:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:06.695 13:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.695 13:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.695 13:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.695 13:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:06.695 13:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:06.695 13:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:06.953 13:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:25:06.953 13:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:06.953 13:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:06.953 13:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:06.953 13:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:06.953 13:43:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:25:06.953 13:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:06.953 13:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.954 13:43:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:06.954 13:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:06.954 13:43:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:07.519 00:25:07.519 13:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:07.519 13:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:07.519 13:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:07.776 13:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.776 13:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:07.776 13:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.776 13:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:07.776 13:43:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.776 13:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:07.776 { 00:25:07.776 "auth": { 00:25:07.777 "dhgroup": "ffdhe4096", 00:25:07.777 "digest": "sha512", 00:25:07.777 "state": "completed" 00:25:07.777 }, 00:25:07.777 "cntlid": 125, 00:25:07.777 "listen_address": { 00:25:07.777 "adrfam": "IPv4", 00:25:07.777 "traddr": "10.0.0.2", 00:25:07.777 "trsvcid": "4420", 00:25:07.777 "trtype": "TCP" 00:25:07.777 }, 00:25:07.777 "peer_address": { 00:25:07.777 "adrfam": "IPv4", 00:25:07.777 "traddr": "10.0.0.1", 00:25:07.777 "trsvcid": "52952", 00:25:07.777 "trtype": "TCP" 00:25:07.777 }, 00:25:07.777 "qid": 0, 00:25:07.777 "state": "enabled" 00:25:07.777 } 00:25:07.777 ]' 00:25:07.777 13:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:07.777 13:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:07.777 13:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:07.777 13:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:07.777 13:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:07.777 13:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:07.777 13:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:07.777 13:43:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:08.034 13:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:25:08.981 13:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:08.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:08.981 13:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:08.981 13:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.981 13:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:08.981 13:43:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:08.981 13:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:08.981 13:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:08.981 13:43:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:08.981 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:25:08.981 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:08.981 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:08.981 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:25:08.981 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:08.981 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:25:08.981 13:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:08.981 13:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.239 13:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.239 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:09.239 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:09.498 00:25:09.498 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:09.498 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:09.498 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 
00:25:09.756 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.756 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:09.756 13:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:09.756 13:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.756 13:43:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:09.756 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:09.756 { 00:25:09.756 "auth": { 00:25:09.756 "dhgroup": "ffdhe4096", 00:25:09.756 "digest": "sha512", 00:25:09.756 "state": "completed" 00:25:09.756 }, 00:25:09.756 "cntlid": 127, 00:25:09.756 "listen_address": { 00:25:09.756 "adrfam": "IPv4", 00:25:09.756 "traddr": "10.0.0.2", 00:25:09.756 "trsvcid": "4420", 00:25:09.756 "trtype": "TCP" 00:25:09.756 }, 00:25:09.756 "peer_address": { 00:25:09.756 "adrfam": "IPv4", 00:25:09.756 "traddr": "10.0.0.1", 00:25:09.756 "trsvcid": "52984", 00:25:09.756 "trtype": "TCP" 00:25:09.756 }, 00:25:09.756 "qid": 0, 00:25:09.756 "state": "enabled" 00:25:09.756 } 00:25:09.756 ]' 00:25:09.756 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:09.756 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:09.756 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:10.014 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:25:10.014 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:10.014 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:10.014 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:10.014 13:43:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:10.272 13:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:25:10.839 13:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:10.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:10.839 13:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:10.839 13:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.839 13:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:10.839 13:43:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.839 13:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:25:10.839 13:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:10.839 13:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:25:10.839 13:43:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:11.097 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:25:11.097 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:11.098 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:11.098 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:11.098 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:11.098 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:25:11.098 13:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.098 13:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.098 13:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.098 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:11.098 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:11.664 00:25:11.664 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:11.664 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:11.664 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:11.923 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.923 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:11.923 13:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:11.923 13:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.923 13:43:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:11.923 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:11.923 { 00:25:11.923 "auth": { 00:25:11.923 "dhgroup": "ffdhe6144", 00:25:11.923 "digest": "sha512", 00:25:11.923 "state": "completed" 00:25:11.923 }, 00:25:11.923 "cntlid": 129, 00:25:11.923 "listen_address": { 00:25:11.923 "adrfam": "IPv4", 00:25:11.923 "traddr": "10.0.0.2", 00:25:11.923 "trsvcid": "4420", 00:25:11.923 "trtype": "TCP" 00:25:11.923 }, 00:25:11.923 "peer_address": { 00:25:11.923 "adrfam": "IPv4", 00:25:11.923 "traddr": "10.0.0.1", 00:25:11.923 "trsvcid": "53002", 00:25:11.923 "trtype": "TCP" 00:25:11.923 }, 00:25:11.923 "qid": 0, 00:25:11.923 "state": "enabled" 00:25:11.923 } 00:25:11.923 ]' 00:25:11.923 13:43:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 
00:25:11.923 13:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:11.923 13:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:12.181 13:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:12.181 13:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:12.181 13:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:12.181 13:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:12.181 13:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:12.439 13:43:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:25:13.005 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:13.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.333 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:13.334 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:13.898 00:25:13.898 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:13.898 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:13.898 13:43:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:14.156 13:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.156 13:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:14.156 13:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:14.156 13:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:14.156 13:43:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:14.156 13:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:14.156 { 00:25:14.156 "auth": { 00:25:14.156 "dhgroup": "ffdhe6144", 00:25:14.156 "digest": "sha512", 00:25:14.156 "state": "completed" 00:25:14.156 }, 00:25:14.156 "cntlid": 131, 00:25:14.156 "listen_address": { 00:25:14.156 "adrfam": "IPv4", 00:25:14.156 "traddr": "10.0.0.2", 00:25:14.156 "trsvcid": "4420", 00:25:14.156 "trtype": "TCP" 00:25:14.156 }, 00:25:14.156 "peer_address": { 00:25:14.156 "adrfam": "IPv4", 00:25:14.156 "traddr": "10.0.0.1", 00:25:14.156 "trsvcid": "53030", 00:25:14.156 "trtype": "TCP" 00:25:14.156 }, 00:25:14.156 "qid": 0, 00:25:14.156 "state": "enabled" 00:25:14.156 } 00:25:14.156 ]' 00:25:14.156 13:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:14.156 13:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:14.156 13:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:14.156 13:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:14.156 13:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:14.413 13:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:14.414 13:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:14.414 13:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:14.671 13:43:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:25:15.239 13:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:15.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:15.239 13:43:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:15.239 13:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.239 13:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.239 13:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.239 13:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:15.239 13:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:15.239 13:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:15.806 13:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:25:15.806 13:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:15.806 13:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:15.806 13:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:15.806 13:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:15.806 13:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:25:15.806 13:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:15.806 13:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:15.806 13:43:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:15.806 13:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:15.806 13:43:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:16.064 00:25:16.064 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:16.064 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:16.064 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:16.322 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.322 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:16.322 13:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.322 13:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:16.322 13:43:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.322 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:16.322 { 00:25:16.322 "auth": { 
00:25:16.322 "dhgroup": "ffdhe6144", 00:25:16.322 "digest": "sha512", 00:25:16.322 "state": "completed" 00:25:16.322 }, 00:25:16.322 "cntlid": 133, 00:25:16.322 "listen_address": { 00:25:16.322 "adrfam": "IPv4", 00:25:16.322 "traddr": "10.0.0.2", 00:25:16.322 "trsvcid": "4420", 00:25:16.322 "trtype": "TCP" 00:25:16.322 }, 00:25:16.322 "peer_address": { 00:25:16.322 "adrfam": "IPv4", 00:25:16.322 "traddr": "10.0.0.1", 00:25:16.322 "trsvcid": "49952", 00:25:16.322 "trtype": "TCP" 00:25:16.322 }, 00:25:16.322 "qid": 0, 00:25:16.322 "state": "enabled" 00:25:16.322 } 00:25:16.322 ]' 00:25:16.322 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:16.322 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:16.322 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:16.580 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:16.580 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:16.580 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:16.580 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:16.580 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:16.838 13:43:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:25:17.404 13:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:17.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:17.404 13:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:17.404 13:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.404 13:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:17.404 13:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.404 13:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:17.404 13:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:17.404 13:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:17.662 13:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:25:17.662 13:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:17.662 13:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:17.662 13:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:25:17.662 13:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:17.662 13:43:30 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:25:17.662 13:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.662 13:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:17.662 13:43:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.662 13:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:17.662 13:43:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:18.228 00:25:18.228 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:18.228 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:18.228 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:18.486 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.486 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:18.486 13:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.486 13:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:18.486 13:43:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.486 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:18.486 { 00:25:18.486 "auth": { 00:25:18.486 "dhgroup": "ffdhe6144", 00:25:18.486 "digest": "sha512", 00:25:18.486 "state": "completed" 00:25:18.486 }, 00:25:18.486 "cntlid": 135, 00:25:18.486 "listen_address": { 00:25:18.486 "adrfam": "IPv4", 00:25:18.486 "traddr": "10.0.0.2", 00:25:18.486 "trsvcid": "4420", 00:25:18.486 "trtype": "TCP" 00:25:18.486 }, 00:25:18.486 "peer_address": { 00:25:18.486 "adrfam": "IPv4", 00:25:18.486 "traddr": "10.0.0.1", 00:25:18.486 "trsvcid": "49992", 00:25:18.486 "trtype": "TCP" 00:25:18.486 }, 00:25:18.486 "qid": 0, 00:25:18.486 "state": "enabled" 00:25:18.486 } 00:25:18.486 ]' 00:25:18.486 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:18.486 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:18.486 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:18.744 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:25:18.744 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:18.744 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:18.744 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:18.744 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:19.002 13:43:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:25:19.569 13:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:19.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:19.569 13:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:19.569 13:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.569 13:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:19.569 13:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.569 13:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:25:19.569 13:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:19.569 13:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:19.569 13:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:19.828 13:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:25:19.828 13:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:19.828 13:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:19.828 13:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:19.828 13:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:19.828 13:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:25:19.828 13:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:19.828 13:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:19.828 13:43:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:19.828 13:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:19.828 13:43:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:20.394 00:25:20.651 13:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:20.651 13:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:20.651 13:43:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:20.651 13:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.651 13:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:20.651 13:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:20.651 13:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.909 13:43:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:20.909 13:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:20.909 { 00:25:20.909 "auth": { 00:25:20.909 "dhgroup": "ffdhe8192", 00:25:20.909 "digest": "sha512", 00:25:20.909 "state": "completed" 00:25:20.909 }, 00:25:20.909 "cntlid": 137, 00:25:20.909 "listen_address": { 00:25:20.909 "adrfam": "IPv4", 00:25:20.909 "traddr": "10.0.0.2", 00:25:20.909 "trsvcid": "4420", 00:25:20.909 "trtype": "TCP" 00:25:20.909 }, 00:25:20.909 "peer_address": { 00:25:20.909 "adrfam": "IPv4", 00:25:20.909 "traddr": "10.0.0.1", 00:25:20.909 "trsvcid": "50018", 00:25:20.909 "trtype": "TCP" 00:25:20.909 }, 00:25:20.909 "qid": 0, 00:25:20.910 "state": "enabled" 00:25:20.910 } 00:25:20.910 ]' 00:25:20.910 13:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:20.910 13:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:20.910 13:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:20.910 13:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:20.910 13:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:20.910 13:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:20.910 13:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:20.910 13:43:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:21.167 13:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:25:21.732 13:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:21.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:21.732 13:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:21.732 13:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:21.732 13:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:21.732 13:43:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:21.732 13:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:21.732 13:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:21.732 13:43:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:22.299 13:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:25:22.299 13:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:22.299 13:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:22.299 13:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:22.299 13:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:25:22.299 13:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:25:22.299 13:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.299 13:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:22.299 13:43:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.299 13:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:22.299 13:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:25:22.901 00:25:22.901 13:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:22.901 13:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:22.901 13:43:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:23.161 13:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.161 13:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:23.161 13:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.161 13:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:23.161 13:43:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.161 13:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:23.161 { 00:25:23.161 "auth": { 00:25:23.161 "dhgroup": "ffdhe8192", 00:25:23.161 "digest": "sha512", 00:25:23.161 "state": "completed" 00:25:23.161 }, 00:25:23.161 "cntlid": 139, 00:25:23.161 "listen_address": { 00:25:23.161 "adrfam": "IPv4", 00:25:23.161 "traddr": "10.0.0.2", 00:25:23.161 "trsvcid": "4420", 00:25:23.161 "trtype": "TCP" 00:25:23.161 }, 00:25:23.161 "peer_address": { 00:25:23.161 "adrfam": "IPv4", 00:25:23.161 "traddr": "10.0.0.1", 00:25:23.161 "trsvcid": "50054", 00:25:23.161 "trtype": "TCP" 00:25:23.161 }, 00:25:23.161 "qid": 0, 00:25:23.161 "state": "enabled" 00:25:23.161 } 00:25:23.161 ]' 00:25:23.161 13:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # 
jq -r '.[0].auth.digest' 00:25:23.161 13:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:23.161 13:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:23.161 13:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:23.161 13:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:23.162 13:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:23.162 13:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:23.162 13:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:23.727 13:43:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:01:OWYyYWVkZjg0OTQ0NmRlNzRjMzFkNjA4NWM5NzRlN2asxnOx: 00:25:24.294 13:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:24.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:24.294 13:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:24.294 13:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.294 13:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.294 13:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.294 13:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:24.294 13:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:24.294 13:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:24.552 13:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:25:24.552 13:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:24.552 13:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:24.552 13:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:24.552 13:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:25:24.552 13:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key2 00:25:24.552 13:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:24.552 13:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:24.552 13:43:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:24.552 13:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:24.552 13:43:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:25.119 00:25:25.119 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:25.119 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:25.119 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:25.378 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.378 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:25.378 13:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:25.378 13:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:25.378 13:43:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:25.378 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:25.378 { 00:25:25.378 "auth": { 00:25:25.378 "dhgroup": "ffdhe8192", 00:25:25.378 "digest": "sha512", 00:25:25.378 "state": "completed" 00:25:25.378 }, 00:25:25.378 "cntlid": 141, 00:25:25.378 "listen_address": { 00:25:25.378 "adrfam": "IPv4", 00:25:25.378 "traddr": "10.0.0.2", 00:25:25.378 "trsvcid": "4420", 00:25:25.378 "trtype": "TCP" 00:25:25.378 }, 00:25:25.378 "peer_address": { 00:25:25.378 "adrfam": "IPv4", 00:25:25.378 "traddr": "10.0.0.1", 00:25:25.378 "trsvcid": "50086", 00:25:25.378 "trtype": "TCP" 00:25:25.378 }, 00:25:25.378 "qid": 0, 00:25:25.378 "state": "enabled" 00:25:25.378 } 00:25:25.378 ]' 00:25:25.637 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:25.637 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:25.637 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:25.637 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:25.637 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:25.637 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:25.637 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:25.637 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:25.896 13:43:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:02:MzdlN2M4YjdhMGE5MjliYmI1N2NiNzc1ODhmNzFkMGNmOTg3MzU0YTI0NmU3ODRjoPxE/A==: 00:25:26.463 13:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:26.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:26.463 13:43:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:26.463 13:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:26.463 13:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:26.463 13:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:26.463 13:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:25:26.463 13:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:26.463 13:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:27.029 13:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:25:27.029 13:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:27.029 13:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:27.029 13:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:27.029 13:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:25:27.029 13:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key3 00:25:27.029 13:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.029 13:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:27.029 13:43:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.029 13:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:27.029 13:43:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:25:27.610 00:25:27.610 13:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:27.610 13:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:27.610 13:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:27.869 13:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.869 13:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:27.869 13:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.869 13:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:27.869 13:43:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.869 13:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:27.869 { 
00:25:27.869 "auth": { 00:25:27.869 "dhgroup": "ffdhe8192", 00:25:27.869 "digest": "sha512", 00:25:27.869 "state": "completed" 00:25:27.869 }, 00:25:27.869 "cntlid": 143, 00:25:27.869 "listen_address": { 00:25:27.869 "adrfam": "IPv4", 00:25:27.869 "traddr": "10.0.0.2", 00:25:27.869 "trsvcid": "4420", 00:25:27.869 "trtype": "TCP" 00:25:27.869 }, 00:25:27.869 "peer_address": { 00:25:27.869 "adrfam": "IPv4", 00:25:27.869 "traddr": "10.0.0.1", 00:25:27.869 "trsvcid": "44684", 00:25:27.869 "trtype": "TCP" 00:25:27.869 }, 00:25:27.869 "qid": 0, 00:25:27.869 "state": "enabled" 00:25:27.869 } 00:25:27.869 ]' 00:25:27.869 13:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:27.869 13:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:27.869 13:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:27.869 13:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:25:27.869 13:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:27.869 13:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:27.869 13:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:27.869 13:43:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:28.435 13:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:03:OGRkYmMzMDE1MmUxOGUwNDVhYTk0MmUzZTBhYWI1Njk4MTJhMGU1M2FkZTgyYjI0ZTMyZmNjZjA4NGNmNTFiNAj3OgM=: 00:25:29.001 13:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:29.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:29.001 13:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:29.001 13:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.001 13:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:29.001 13:43:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.001 13:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:25:29.001 13:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:25:29.001 13:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:25:29.001 13:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:29.001 13:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:29.001 13:43:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:29.258 13:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 
-- # connect_authenticate sha512 ffdhe8192 0 00:25:29.258 13:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:25:29.258 13:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:25:29.258 13:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:25:29.258 13:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:25:29.258 13:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0 00:25:29.258 13:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.258 13:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:29.258 13:43:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.258 13:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:29.258 13:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:25:29.824 00:25:30.083 13:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:25:30.083 13:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:25:30.083 13:43:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:25:30.341 13:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.341 13:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:25:30.341 13:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:30.341 13:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:30.341 13:43:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:30.341 13:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:25:30.341 { 00:25:30.341 "auth": { 00:25:30.341 "dhgroup": "ffdhe8192", 00:25:30.341 "digest": "sha512", 00:25:30.341 "state": "completed" 00:25:30.341 }, 00:25:30.341 "cntlid": 145, 00:25:30.341 "listen_address": { 00:25:30.341 "adrfam": "IPv4", 00:25:30.341 "traddr": "10.0.0.2", 00:25:30.341 "trsvcid": "4420", 00:25:30.341 "trtype": "TCP" 00:25:30.341 }, 00:25:30.341 "peer_address": { 00:25:30.341 "adrfam": "IPv4", 00:25:30.341 "traddr": "10.0.0.1", 00:25:30.341 "trsvcid": "44710", 00:25:30.341 "trtype": "TCP" 00:25:30.341 }, 00:25:30.341 "qid": 0, 00:25:30.341 "state": "enabled" 00:25:30.341 } 00:25:30.341 ]' 00:25:30.341 13:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:25:30.341 13:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:25:30.341 13:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:25:30.341 13:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 
]] 00:25:30.341 13:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:25:30.341 13:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:25:30.341 13:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:25:30.341 13:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:25:30.908 13:43:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid 1922f591-978b-44b0-bc45-c969115d53dd --dhchap-secret DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==: 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:25:31.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key1 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:31.473 13:43:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:25:32.072 2024/05/15 13:43:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd name:nvme0 subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:25:32.072 request: 00:25:32.072 { 00:25:32.072 "method": "bdev_nvme_attach_controller", 00:25:32.072 "params": { 00:25:32.072 "name": "nvme0", 00:25:32.072 "trtype": "tcp", 00:25:32.072 "traddr": "10.0.0.2", 00:25:32.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd", 00:25:32.072 "adrfam": "ipv4", 00:25:32.072 "trsvcid": "4420", 00:25:32.072 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:25:32.072 "dhchap_key": "key2" 00:25:32.072 } 00:25:32.072 } 00:25:32.072 Got JSON-RPC error response 00:25:32.072 GoRPCClient: error on JSON-RPC call 00:25:32.072 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:25:32.072 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:32.072 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:32.072 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:32.072 13:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:32.072 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.072 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:32.347 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.347 13:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:25:32.347 13:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:25:32.347 13:43:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 94648 00:25:32.347 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 94648 ']' 00:25:32.347 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 94648 00:25:32.347 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:25:32.347 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:32.347 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94648 00:25:32.347 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:32.347 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:32.347 killing process with pid 94648 00:25:32.347 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94648' 00:25:32.347 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 94648 00:25:32.347 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 94648 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@22 -- # nvmftestfini 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:32.915 rmmod nvme_tcp 00:25:32.915 rmmod nvme_fabrics 00:25:32.915 rmmod nvme_keyring 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 94604 ']' 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 94604 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 94604 ']' 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 94604 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94604 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:32.915 killing process with pid 94604 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94604' 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 94604 00:25:32.915 13:43:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 94604 00:25:33.174 13:43:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:33.174 13:43:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:33.174 13:43:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:33.174 13:43:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:33.174 13:43:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:33.174 13:43:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.174 13:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.174 13:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.174 13:43:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:33.174 13:43:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.5Tq /tmp/spdk.key-sha256.dsZ /tmp/spdk.key-sha384.dYp /tmp/spdk.key-sha512.sRk /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:25:33.174 00:25:33.174 real 2m48.115s 00:25:33.174 user 6m47.253s 00:25:33.174 sys 0m21.949s 00:25:33.174 13:43:46 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:33.174 ************************************ 00:25:33.174 END TEST nvmf_auth_target 00:25:33.174 13:43:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:25:33.174 ************************************ 00:25:33.174 13:43:46 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:25:33.174 13:43:46 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:33.174 13:43:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:25:33.174 13:43:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:33.174 13:43:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:33.174 ************************************ 00:25:33.174 START TEST nvmf_bdevio_no_huge 00:25:33.174 ************************************ 00:25:33.174 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:25:33.432 * Looking for test storage... 00:25:33.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 
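Note: each connect_authenticate iteration traced in the nvmf_auth_target run above reduces to the sequence sketched below. This is a condensed paraphrase of the trace (same RPC sockets, NQNs, host UUID, key names and secrets as in this run), not an extra command executed by the test; rpc_cmd talks to the target app on its default socket, while the host-side bdev_nvme RPCs go through /var/tmp/host.sock.

    # host side: restrict the allowed digest/dhgroup pair for this iteration
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # target side: allow the host NQN to authenticate with the key under test
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --dhchap-key key0
    # host side: attach a controller, which forces DH-HMAC-CHAP negotiation
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0
    # verify what was actually negotiated on the resulting qpair
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # sha512
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # ffdhe8192
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # completed
    # tear down and repeat for the next digest/dhgroup/key combination: detach, then
    # connect/disconnect the kernel initiator with the matching DHHC-1 secret, then
    # drop the host entry again
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd \
        --hostid 1922f591-978b-44b0-bc45-c969115d53dd \
        --dhchap-secret "DHHC-1:00:YzIwYTM0MjdlZTU5MmFmMWM5ODU3MmUyODc3YzllZGFkNzExNDQ1NDgwZmUyMzM3sT/bFQ==:"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd

The negative test at target/auth.sh@110-111 then adds the host with key1 but attempts the attach with key2, expecting bdev_nvme_attach_controller to fail; that is the "Code=-32602 Msg=Invalid parameters" JSON-RPC error captured in the trace above.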
00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:33.432 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:33.433 Cannot find device "nvmf_tgt_br" 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:33.433 Cannot find 
device "nvmf_tgt_br2" 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:33.433 Cannot find device "nvmf_tgt_br" 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:33.433 Cannot find device "nvmf_tgt_br2" 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:33.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:33.433 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:33.433 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:33.691 13:43:46 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:33.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:25:33.691 00:25:33.691 --- 10.0.0.2 ping statistics --- 00:25:33.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.691 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:33.691 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:33.691 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:25:33.691 00:25:33.691 --- 10.0.0.3 ping statistics --- 00:25:33.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.691 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:33.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:33.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:25:33.691 00:25:33.691 --- 10.0.0.1 ping statistics --- 00:25:33.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.691 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=99627 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 99627 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 99627 ']' 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:33.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:33.691 13:43:46 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:33.691 [2024-05-15 13:43:46.744816] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:25:33.691 [2024-05-15 13:43:46.744909] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:25:33.949 [2024-05-15 13:43:46.884682] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
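Note: condensed for reference, the nvmf_veth_init sequence traced above builds the following topology before the target application is started. Interface names, addresses and the nvmf_tgt invocation are the ones from this run; this is a paraphrase of the trace (individual "ip link set ... up" calls omitted), not an additional setup step.

    # target-side interfaces live in a dedicated network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator side gets 10.0.0.1, target side 10.0.0.2 and 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bridge the three *_br peers together and open TCP/4420 towards the initiator
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # after the connectivity pings, the target runs inside the namespace with
    # hugepages disabled and a 1024 MB memory size (-s is in MB)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78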
00:25:33.949 [2024-05-15 13:43:46.887806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:33.949 [2024-05-15 13:43:47.000782] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.949 [2024-05-15 13:43:47.000831] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.949 [2024-05-15 13:43:47.000843] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.949 [2024-05-15 13:43:47.000851] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.949 [2024-05-15 13:43:47.000859] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:33.949 [2024-05-15 13:43:47.001064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:33.949 [2024-05-15 13:43:47.001107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:25:33.949 [2024-05-15 13:43:47.001247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:25:33.949 [2024-05-15 13:43:47.001250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:34.882 [2024-05-15 13:43:47.899631] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:34.882 Malloc0 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:34.882 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.883 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:34.883 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.883 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:34.883 [2024-05-15 13:43:47.937963] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:34.883 [2024-05-15 13:43:47.938248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:34.883 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.883 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:25:34.883 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:25:34.883 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:25:34.883 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:25:34.883 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:34.883 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:34.883 { 00:25:34.883 "params": { 00:25:34.883 "name": "Nvme$subsystem", 00:25:34.883 "trtype": "$TEST_TRANSPORT", 00:25:34.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:34.883 "adrfam": "ipv4", 00:25:34.883 "trsvcid": "$NVMF_PORT", 00:25:34.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:34.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:34.883 "hdgst": ${hdgst:-false}, 00:25:34.883 "ddgst": ${ddgst:-false} 00:25:34.883 }, 00:25:34.883 "method": "bdev_nvme_attach_controller" 00:25:34.883 } 00:25:34.883 EOF 00:25:34.883 )") 00:25:34.883 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:25:34.883 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:25:34.883 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:25:34.883 13:43:47 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:34.883 "params": { 00:25:34.883 "name": "Nvme1", 00:25:34.883 "trtype": "tcp", 00:25:34.883 "traddr": "10.0.0.2", 00:25:34.883 "adrfam": "ipv4", 00:25:34.883 "trsvcid": "4420", 00:25:34.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:34.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:34.883 "hdgst": false, 00:25:34.883 "ddgst": false 00:25:34.883 }, 00:25:34.883 "method": "bdev_nvme_attach_controller" 00:25:34.883 }' 00:25:35.139 [2024-05-15 13:43:47.990098] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:25:35.139 [2024-05-15 13:43:47.990180] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid99681 ] 00:25:35.139 [2024-05-15 13:43:48.121651] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:25:35.139 [2024-05-15 13:43:48.123993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:35.396 [2024-05-15 13:43:48.260317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:35.396 [2024-05-15 13:43:48.260516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:35.396 [2024-05-15 13:43:48.260522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.396 I/O targets: 00:25:35.396 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:25:35.396 00:25:35.396 00:25:35.396 CUnit - A unit testing framework for C - Version 2.1-3 00:25:35.396 http://cunit.sourceforge.net/ 00:25:35.396 00:25:35.396 00:25:35.396 Suite: bdevio tests on: Nvme1n1 00:25:35.396 Test: blockdev write read block ...passed 00:25:35.657 Test: blockdev write zeroes read block ...passed 00:25:35.657 Test: blockdev write zeroes read no split ...passed 00:25:35.657 Test: blockdev write zeroes read split ...passed 00:25:35.657 Test: blockdev write zeroes read split partial ...passed 00:25:35.657 Test: blockdev reset ...[2024-05-15 13:43:48.617491] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:35.657 [2024-05-15 13:43:48.617638] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x4d64d0 (9): Bad file descriptor 00:25:35.657 [2024-05-15 13:43:48.630853] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:35.657 passed 00:25:35.657 Test: blockdev write read 8 blocks ...passed 00:25:35.657 Test: blockdev write read size > 128k ...passed 00:25:35.657 Test: blockdev write read invalid size ...passed 00:25:35.657 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:25:35.657 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:25:35.657 Test: blockdev write read max offset ...passed 00:25:35.913 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:25:35.913 Test: blockdev writev readv 8 blocks ...passed 00:25:35.913 Test: blockdev writev readv 30 x 1block ...passed 00:25:35.913 Test: blockdev writev readv block ...passed 00:25:35.913 Test: blockdev writev readv size > 128k ...passed 00:25:35.913 Test: blockdev writev readv size > 128k in two iovs ...passed 00:25:35.914 Test: blockdev comparev and writev ...[2024-05-15 13:43:48.808237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:35.914 [2024-05-15 13:43:48.808316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:35.914 [2024-05-15 13:43:48.808365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:35.914 [2024-05-15 13:43:48.808377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.914 [2024-05-15 13:43:48.808702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:35.914 [2024-05-15 13:43:48.808721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:35.914 [2024-05-15 13:43:48.808739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:35.914 [2024-05-15 13:43:48.808749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:35.914 [2024-05-15 13:43:48.809064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:35.914 [2024-05-15 13:43:48.809081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:35.914 [2024-05-15 13:43:48.809098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:35.914 [2024-05-15 13:43:48.809108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:35.914 [2024-05-15 13:43:48.809400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:35.914 [2024-05-15 13:43:48.809417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:35.914 [2024-05-15 13:43:48.809433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:25:35.914 [2024-05-15 13:43:48.809444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:35.914 passed 00:25:35.914 Test: blockdev nvme passthru rw ...passed 00:25:35.914 Test: blockdev nvme passthru vendor specific ...[2024-05-15 13:43:48.894016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:35.914 [2024-05-15 13:43:48.894071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:35.914 [2024-05-15 13:43:48.894197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:35.914 [2024-05-15 13:43:48.894214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:35.914 [2024-05-15 13:43:48.894349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:35.914 [2024-05-15 13:43:48.894366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:35.914 [2024-05-15 13:43:48.894492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:35.914 [2024-05-15 13:43:48.894507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:35.914 passed 00:25:35.914 Test: blockdev nvme admin passthru ...passed 00:25:35.914 Test: blockdev copy ...passed 00:25:35.914 00:25:35.914 Run Summary: Type Total Ran Passed Failed Inactive 00:25:35.914 suites 1 1 n/a 0 0 00:25:35.914 tests 23 23 23 0 0 00:25:35.914 asserts 152 152 152 0 n/a 00:25:35.914 00:25:35.914 Elapsed time = 1.013 seconds 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:36.476 
13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:36.476 rmmod nvme_tcp 00:25:36.476 rmmod nvme_fabrics 00:25:36.476 rmmod nvme_keyring 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 99627 ']' 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 99627 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 99627 ']' 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 99627 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99627 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:25:36.476 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:25:36.477 killing process with pid 99627 00:25:36.477 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99627' 00:25:36.477 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 99627 00:25:36.477 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 99627 00:25:36.477 [2024-05-15 13:43:49.396834] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:36.733 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:36.733 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:36.733 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:36.733 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:36.733 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:36.733 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.733 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.733 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.733 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:36.733 00:25:36.733 real 0m3.599s 00:25:36.733 user 0m13.091s 00:25:36.733 sys 0m1.326s 00:25:36.733 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:36.733 ************************************ 00:25:36.733 END TEST nvmf_bdevio_no_huge 00:25:36.733 ************************************ 00:25:36.733 13:43:49 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:25:36.991 13:43:49 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:36.991 13:43:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:36.991 13:43:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:36.991 13:43:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:36.991 ************************************ 00:25:36.991 START TEST nvmf_tls 00:25:36.991 ************************************ 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:25:36.991 * Looking for test storage... 00:25:36.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:36.991 13:43:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:36.991 Cannot find device "nvmf_tgt_br" 00:25:36.991 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:25:36.991 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:36.991 Cannot find device "nvmf_tgt_br2" 00:25:36.991 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:25:36.991 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:36.991 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:36.991 Cannot find device "nvmf_tgt_br" 00:25:36.991 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:25:36.991 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 
down 00:25:36.991 Cannot find device "nvmf_tgt_br2" 00:25:36.991 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:25:36.991 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:37.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:37.249 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # 
ping -c 1 10.0.0.2 00:25:37.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:25:37.249 00:25:37.249 --- 10.0.0.2 ping statistics --- 00:25:37.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.249 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:37.249 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:37.249 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:25:37.249 00:25:37.249 --- 10.0.0.3 ping statistics --- 00:25:37.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.249 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:25:37.249 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:37.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:37.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:25:37.507 00:25:37.507 --- 10.0.0.1 ping statistics --- 00:25:37.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.507 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=99866 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 99866 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 99866 ']' 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:37.507 13:43:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:37.507 [2024-05-15 13:43:50.429808] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:25:37.508 [2024-05-15 13:43:50.429901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.508 [2024-05-15 13:43:50.551651] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:37.508 [2024-05-15 13:43:50.563337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.765 [2024-05-15 13:43:50.656926] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.765 [2024-05-15 13:43:50.656980] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.765 [2024-05-15 13:43:50.656992] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.765 [2024-05-15 13:43:50.657000] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.765 [2024-05-15 13:43:50.657007] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:37.765 [2024-05-15 13:43:50.657033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.330 13:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:38.330 13:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:25:38.330 13:43:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:38.330 13:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:38.330 13:43:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:38.587 13:43:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.587 13:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:25:38.587 13:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:25:38.587 true 00:25:38.845 13:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:38.845 13:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:25:39.102 13:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:25:39.102 13:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:25:39.102 13:43:51 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:39.102 13:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:39.102 13:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:25:39.359 13:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:25:39.359 13:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:25:39.359 13:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 
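The tls.sh prologue configures the SPDK socket layer entirely over JSON-RPC: it selects the ssl implementation, writes an option, then reads the options back and checks the field with jq, as the trace shows for tls-version 13 just above (and repeats below for version 7 and the ktls flag). A condensed sketch of that set-and-verify pattern; the error message is illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13

# Read the option back and confirm the target accepted it.
version=$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)
if [[ $version != 13 ]]; then
    echo "unexpected tls_version: $version" >&2
    exit 1
fi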
00:25:39.616 13:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:39.616 13:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:25:39.874 13:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:25:39.874 13:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:25:39.874 13:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:39.874 13:43:52 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:25:40.132 13:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:25:40.132 13:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:25:40.132 13:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:25:40.394 13:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:40.394 13:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:25:40.651 13:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:25:40.651 13:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:25:40.651 13:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:25:40.909 13:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:25:40.910 13:43:53 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- 
target/tls.sh@121 -- # mktemp 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.2gb0laY3JA 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.ctqFXkYsxW 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.2gb0laY3JA 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ctqFXkYsxW 00:25:41.239 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:25:41.497 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:25:42.064 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.2gb0laY3JA 00:25:42.064 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.2gb0laY3JA 00:25:42.064 13:43:54 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:42.064 [2024-05-15 13:43:55.145462] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.323 13:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:42.581 13:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:42.581 [2024-05-15 13:43:55.653542] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:42.581 [2024-05-15 13:43:55.653665] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:42.581 [2024-05-15 13:43:55.653869] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.581 13:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:42.839 malloc0 00:25:42.839 13:43:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:43.097 13:43:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2gb0laY3JA 00:25:43.356 [2024-05-15 13:43:56.370065] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:43.356 13:43:56 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.2gb0laY3JA 00:25:55.552 Initializing NVMe Controllers 00:25:55.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:25:55.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:55.552 Initialization complete. Launching workers. 00:25:55.552 ======================================================== 00:25:55.552 Latency(us) 00:25:55.552 Device Information : IOPS MiB/s Average min max 00:25:55.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9032.36 35.28 7087.26 1772.37 9311.71 00:25:55.552 ======================================================== 00:25:55.552 Total : 9032.36 35.28 7087.26 1772.37 9311.71 00:25:55.552 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.2gb0laY3JA 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2gb0laY3JA' 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100223 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100223 /var/tmp/bdevperf.sock 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100223 ']' 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:55.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:55.552 13:44:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:55.552 [2024-05-15 13:44:06.639378] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:25:55.552 [2024-05-15 13:44:06.639741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100223 ] 00:25:55.552 [2024-05-15 13:44:06.764769] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
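Before the spdk_nvme_perf run above and the bdevperf runs that follow, each PSK is written out in NVMe TLS interchange format as a mode-0600 temp file; the target registers it for the host with nvmf_subsystem_add_host --psk and the initiator-side tools point at the same file. A condensed sketch of that wiring, reusing the key, NQNs and command lines that appear in the trace (the mktemp result happens to be /tmp/tmp.2gb0laY3JA in this particular run):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"

# Target side: host1 may connect to cnode1 only when it presents this PSK.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk "$key_path"

# Initiator side: hand the same key file to the perf tool over the ssl sock impl.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path "$key_path"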
00:25:55.552 [2024-05-15 13:44:06.779739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.552 [2024-05-15 13:44:06.918982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.552 13:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:55.552 13:44:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:25:55.552 13:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2gb0laY3JA 00:25:55.552 [2024-05-15 13:44:07.895591] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:55.552 [2024-05-15 13:44:07.895787] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:55.552 TLSTESTn1 00:25:55.552 13:44:07 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:55.552 Running I/O for 10 seconds... 00:26:05.608 00:26:05.608 Latency(us) 00:26:05.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.608 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:05.608 Verification LBA range: start 0x0 length 0x2000 00:26:05.608 TLSTESTn1 : 10.03 3600.14 14.06 0.00 0.00 35475.90 8043.05 25380.31 00:26:05.608 =================================================================================================================== 00:26:05.608 Total : 3600.14 14.06 0.00 0.00 35475.90 8043.05 25380.31 00:26:05.608 0 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 100223 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100223 ']' 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100223 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100223 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100223' 00:26:05.608 killing process with pid 100223 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100223 00:26:05.608 Received shutdown signal, test time was about 10.000000 seconds 00:26:05.608 00:26:05.608 Latency(us) 00:26:05.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.608 =================================================================================================================== 00:26:05.608 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:05.608 [2024-05-15 13:44:18.196657] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:05.608 13:44:18 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100223 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ctqFXkYsxW 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ctqFXkYsxW 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ctqFXkYsxW 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ctqFXkYsxW' 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100375 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100375 /var/tmp/bdevperf.sock 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100375 ']' 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:05.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:05.608 13:44:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:05.608 [2024-05-15 13:44:18.572542] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:05.608 [2024-05-15 13:44:18.572714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100375 ] 00:26:05.608 [2024-05-15 13:44:18.700102] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:05.867 [2024-05-15 13:44:18.714535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.867 [2024-05-15 13:44:18.826268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.802 13:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:06.802 13:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:06.802 13:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ctqFXkYsxW 00:26:06.802 [2024-05-15 13:44:19.855299] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:06.802 [2024-05-15 13:44:19.855448] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:06.802 [2024-05-15 13:44:19.865032] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:06.802 [2024-05-15 13:44:19.865082] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd90670 (107): Transport endpoint is not connected 00:26:06.802 [2024-05-15 13:44:19.866065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd90670 (9): Bad file descriptor 00:26:06.802 [2024-05-15 13:44:19.867061] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.802 [2024-05-15 13:44:19.867085] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:26:06.802 [2024-05-15 13:44:19.867101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:06.802 2024/05/15 13:44:19 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.ctqFXkYsxW subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:26:06.802 request: 00:26:06.802 { 00:26:06.802 "method": "bdev_nvme_attach_controller", 00:26:06.802 "params": { 00:26:06.802 "name": "TLSTEST", 00:26:06.802 "trtype": "tcp", 00:26:06.802 "traddr": "10.0.0.2", 00:26:06.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:06.802 "adrfam": "ipv4", 00:26:06.802 "trsvcid": "4420", 00:26:06.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:06.802 "psk": "/tmp/tmp.ctqFXkYsxW" 00:26:06.802 } 00:26:06.802 } 00:26:06.802 Got JSON-RPC error response 00:26:06.802 GoRPCClient: error on JSON-RPC call 00:26:06.802 13:44:19 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100375 00:26:06.802 13:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100375 ']' 00:26:06.802 13:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100375 00:26:06.802 13:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:06.802 13:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:07.060 13:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100375 00:26:07.060 killing process with pid 100375 00:26:07.060 Received shutdown signal, test time was about 10.000000 seconds 00:26:07.060 00:26:07.060 Latency(us) 00:26:07.060 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.060 =================================================================================================================== 00:26:07.060 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:07.060 13:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:07.060 13:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:07.060 13:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100375' 00:26:07.060 13:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100375 00:26:07.060 13:44:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100375 00:26:07.060 [2024-05-15 13:44:19.917265] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2gb0laY3JA 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2gb0laY3JA 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.2gb0laY3JA 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:07.060 13:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:07.061 13:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:26:07.061 13:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2gb0laY3JA' 00:26:07.061 13:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:07.061 13:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100421 00:26:07.061 13:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:07.061 13:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:07.061 13:44:20 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100421 /var/tmp/bdevperf.sock 00:26:07.061 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100421 ']' 00:26:07.061 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:07.061 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:07.061 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:07.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:07.061 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:07.061 13:44:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:07.319 [2024-05-15 13:44:20.177431] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:07.319 [2024-05-15 13:44:20.177808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100421 ] 00:26:07.319 [2024-05-15 13:44:20.296542] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
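The two attach attempts that follow (host2 against cnode1, then host1 against cnode2) fail in the target's PSK lookup rather than in the TLS handshake itself: the target resolves the pre-shared key by a TLS PSK identity built from both the host NQN and the subsystem NQN, and neither of those pairings has a key registered, as the "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>" errors below show. A rough sketch of how that identity string is composed, copying the literal prefix from the error messages (illustrative shell, not SPDK code):

# the identity the target searches for, as printed in its error log
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"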
00:26:07.319 [2024-05-15 13:44:20.312168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.319 [2024-05-15 13:44:20.412760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.251 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:08.251 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:08.251 13:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.2gb0laY3JA 00:26:08.508 [2024-05-15 13:44:21.457998] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:08.508 [2024-05-15 13:44:21.458150] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:08.508 [2024-05-15 13:44:21.465162] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:26:08.508 [2024-05-15 13:44:21.465215] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:26:08.508 [2024-05-15 13:44:21.465273] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:08.508 [2024-05-15 13:44:21.465816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a4670 (107): Transport endpoint is not connected 00:26:08.508 [2024-05-15 13:44:21.466803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a4670 (9): Bad file descriptor 00:26:08.508 [2024-05-15 13:44:21.467799] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.508 [2024-05-15 13:44:21.467821] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:26:08.508 [2024-05-15 13:44:21.467836] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:08.508 2024/05/15 13:44:21 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.2gb0laY3JA subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:26:08.508 request: 00:26:08.508 { 00:26:08.508 "method": "bdev_nvme_attach_controller", 00:26:08.508 "params": { 00:26:08.508 "name": "TLSTEST", 00:26:08.508 "trtype": "tcp", 00:26:08.508 "traddr": "10.0.0.2", 00:26:08.508 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:08.508 "adrfam": "ipv4", 00:26:08.508 "trsvcid": "4420", 00:26:08.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.508 "psk": "/tmp/tmp.2gb0laY3JA" 00:26:08.508 } 00:26:08.508 } 00:26:08.508 Got JSON-RPC error response 00:26:08.508 GoRPCClient: error on JSON-RPC call 00:26:08.508 13:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100421 00:26:08.508 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100421 ']' 00:26:08.508 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100421 00:26:08.508 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:08.508 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:08.508 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100421 00:26:08.508 killing process with pid 100421 00:26:08.508 Received shutdown signal, test time was about 10.000000 seconds 00:26:08.508 00:26:08.508 Latency(us) 00:26:08.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.508 =================================================================================================================== 00:26:08.508 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:08.508 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:08.508 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:08.508 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100421' 00:26:08.508 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100421 00:26:08.508 [2024-05-15 13:44:21.521027] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:08.508 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100421 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2gb0laY3JA 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2gb0laY3JA 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:26:08.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.2gb0laY3JA 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.2gb0laY3JA' 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100465 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100465 /var/tmp/bdevperf.sock 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100465 ']' 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:08.766 13:44:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:08.766 [2024-05-15 13:44:21.783497] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:08.766 [2024-05-15 13:44:21.783597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100465 ] 00:26:09.025 [2024-05-15 13:44:21.902156] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:09.025 [2024-05-15 13:44:21.918685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.025 [2024-05-15 13:44:22.018225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:09.956 13:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:09.956 13:44:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:09.957 13:44:22 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2gb0laY3JA 00:26:10.214 [2024-05-15 13:44:23.070857] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:10.214 [2024-05-15 13:44:23.071016] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:10.214 [2024-05-15 13:44:23.076034] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:26:10.214 [2024-05-15 13:44:23.076105] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:26:10.214 [2024-05-15 13:44:23.076198] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:10.214 [2024-05-15 13:44:23.076706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1268670 (107): Transport endpoint is not connected 00:26:10.214 [2024-05-15 13:44:23.077688] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1268670 (9): Bad file descriptor 00:26:10.214 [2024-05-15 13:44:23.078684] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:26:10.214 [2024-05-15 13:44:23.078710] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:26:10.214 [2024-05-15 13:44:23.078725] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:26:10.214 2024/05/15 13:44:23 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.2gb0laY3JA subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:26:10.214 request: 00:26:10.214 { 00:26:10.214 "method": "bdev_nvme_attach_controller", 00:26:10.214 "params": { 00:26:10.214 "name": "TLSTEST", 00:26:10.214 "trtype": "tcp", 00:26:10.214 "traddr": "10.0.0.2", 00:26:10.214 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:10.214 "adrfam": "ipv4", 00:26:10.214 "trsvcid": "4420", 00:26:10.214 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:10.214 "psk": "/tmp/tmp.2gb0laY3JA" 00:26:10.214 } 00:26:10.214 } 00:26:10.214 Got JSON-RPC error response 00:26:10.214 GoRPCClient: error on JSON-RPC call 00:26:10.214 13:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100465 00:26:10.214 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100465 ']' 00:26:10.214 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100465 00:26:10.214 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:10.214 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:10.214 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100465 00:26:10.214 killing process with pid 100465 00:26:10.214 Received shutdown signal, test time was about 10.000000 seconds 00:26:10.214 00:26:10.214 Latency(us) 00:26:10.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.214 =================================================================================================================== 00:26:10.214 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:10.214 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:10.214 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:10.214 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100465' 00:26:10.214 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100465 00:26:10.214 [2024-05-15 13:44:23.131461] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:10.214 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100465 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:26:10.471 
13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100512 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100512 /var/tmp/bdevperf.sock 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100512 ']' 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:10.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:10.471 13:44:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:10.471 [2024-05-15 13:44:23.408380] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:10.471 [2024-05-15 13:44:23.408536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100512 ] 00:26:10.471 [2024-05-15 13:44:23.533152] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:10.471 [2024-05-15 13:44:23.550760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.728 [2024-05-15 13:44:23.650884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:11.658 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:11.658 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:11.658 13:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:11.658 [2024-05-15 13:44:24.706066] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:11.658 [2024-05-15 13:44:24.708099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb88330 (9): Bad file descriptor 00:26:11.659 [2024-05-15 13:44:24.709087] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:11.659 [2024-05-15 13:44:24.709111] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:26:11.659 [2024-05-15 13:44:24.709137] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:11.659 2024/05/15 13:44:24 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:26:11.659 request: 00:26:11.659 { 00:26:11.659 "method": "bdev_nvme_attach_controller", 00:26:11.659 "params": { 00:26:11.659 "name": "TLSTEST", 00:26:11.659 "trtype": "tcp", 00:26:11.659 "traddr": "10.0.0.2", 00:26:11.659 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:11.659 "adrfam": "ipv4", 00:26:11.659 "trsvcid": "4420", 00:26:11.659 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:26:11.659 } 00:26:11.659 } 00:26:11.659 Got JSON-RPC error response 00:26:11.659 GoRPCClient: error on JSON-RPC call 00:26:11.659 13:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100512 00:26:11.659 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100512 ']' 00:26:11.659 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100512 00:26:11.659 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:11.659 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:11.659 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100512 00:26:11.917 killing process with pid 100512 00:26:11.917 Received shutdown signal, test time was about 10.000000 seconds 00:26:11.917 00:26:11.917 Latency(us) 00:26:11.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.917 =================================================================================================================== 00:26:11.917 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 100512' 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100512 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100512 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 99866 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 99866 ']' 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 99866 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:11.917 13:44:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99866 00:26:11.917 killing process with pid 99866 00:26:11.917 13:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:11.917 13:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:11.917 13:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99866' 00:26:11.917 13:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 99866 00:26:11.917 [2024-05-15 13:44:25.010347] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:11.917 [2024-05-15 13:44:25.010395] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:11.917 13:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 99866 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.9pftivPFwd 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.9pftivPFwd 00:26:12.482 13:44:25 
nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100572 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100572 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100572 ']' 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:12.482 13:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.483 13:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:12.483 13:44:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:12.483 [2024-05-15 13:44:25.449680] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:12.483 [2024-05-15 13:44:25.449787] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.483 [2024-05-15 13:44:25.570484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:12.740 [2024-05-15 13:44:25.585098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.740 [2024-05-15 13:44:25.704192] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.740 [2024-05-15 13:44:25.704261] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.740 [2024-05-15 13:44:25.704291] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.740 [2024-05-15 13:44:25.704299] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.740 [2024-05-15 13:44:25.704307] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
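The long key assembled a few lines above by format_interchange_psk is the TLS PSK interchange form: the NVMeTLSkey-1 prefix, a two-digit hash identifier (02 here), and a colon-terminated base64 blob. The base64 payload visibly decodes to the 48-character configured key followed by four extra bytes, which suggests a CRC-32 trailer; the sketch below reproduces the string under that assumption (it shells out to python3 much like the traced helper does, and is not a copy of nvmf/common.sh):

key=00112233445566778899aabbccddeeff0011223344556677
python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # assumed: little-endian CRC-32 of the key bytes
print("NVMeTLSkey-1:02:%s:" % base64.b64encode(key + crc).decode())
EOF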
00:26:12.740 [2024-05-15 13:44:25.704343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.698 13:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:13.698 13:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:13.698 13:44:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:13.698 13:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.698 13:44:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:13.698 13:44:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.698 13:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.9pftivPFwd 00:26:13.698 13:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9pftivPFwd 00:26:13.698 13:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:13.698 [2024-05-15 13:44:26.718546] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.698 13:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:13.956 13:44:26 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:14.214 [2024-05-15 13:44:27.234651] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:14.214 [2024-05-15 13:44:27.234785] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:14.214 [2024-05-15 13:44:27.234999] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.214 13:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:14.472 malloc0 00:26:14.472 13:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:14.729 13:44:27 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pftivPFwd 00:26:14.987 [2024-05-15 13:44:27.990430] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9pftivPFwd 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9pftivPFwd' 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100671 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100671 /var/tmp/bdevperf.sock 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100671 ']' 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:14.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:14.987 13:44:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:14.988 [2024-05-15 13:44:28.068725] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:14.988 [2024-05-15 13:44:28.068882] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100671 ] 00:26:15.251 [2024-05-15 13:44:28.188341] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:15.251 [2024-05-15 13:44:28.202655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.251 [2024-05-15 13:44:28.318242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.183 13:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:16.183 13:44:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:16.183 13:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pftivPFwd 00:26:16.183 [2024-05-15 13:44:29.281027] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:16.183 [2024-05-15 13:44:29.281184] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:16.441 TLSTESTn1 00:26:16.441 13:44:29 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:16.441 Running I/O for 10 seconds... 
00:26:26.429 00:26:26.429 Latency(us) 00:26:26.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.429 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:26.429 Verification LBA range: start 0x0 length 0x2000 00:26:26.429 TLSTESTn1 : 10.04 3073.73 12.01 0.00 0.00 41551.76 10485.76 30384.87 00:26:26.429 =================================================================================================================== 00:26:26.429 Total : 3073.73 12.01 0.00 0.00 41551.76 10485.76 30384.87 00:26:26.429 0 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 100671 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100671 ']' 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100671 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100671 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:26.687 killing process with pid 100671 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100671' 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100671 00:26:26.687 Received shutdown signal, test time was about 10.000000 seconds 00:26:26.687 00:26:26.687 Latency(us) 00:26:26.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:26.687 =================================================================================================================== 00:26:26.687 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:26.687 [2024-05-15 13:44:39.558368] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100671 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.9pftivPFwd 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9pftivPFwd 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9pftivPFwd 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:26.687 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9pftivPFwd 00:26:26.688 13:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
00:26:26.688 13:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:26:26.688 13:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:26:26.688 13:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9pftivPFwd' 00:26:26.688 13:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:26.944 13:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=100819 00:26:26.944 13:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:26.944 13:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 100819 /var/tmp/bdevperf.sock 00:26:26.944 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100819 ']' 00:26:26.944 13:44:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:26.944 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:26.944 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:26.944 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:26.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:26.944 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:26.944 13:44:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:26.944 [2024-05-15 13:44:39.836490] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:26.944 [2024-05-15 13:44:39.837544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100819 ] 00:26:26.944 [2024-05-15 13:44:39.957479] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
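The attach attempt that follows is expected to fail: the key file was just switched to mode 0666, and both the initiator side (bdev_nvme_load_psk) and the target side (tcp_load_psk) refuse a PSK file whose permissions are too open, as the "Incorrect permissions for PSK file" errors in this part of the log show. A small pre-flight check along the same lines (the exact mode test is an assumption; the log only states that the permissions are wrong):

psk=/tmp/tmp.9pftivPFwd                 # key file path from the trace
mode=$(stat -c '%a' "$psk")
if (( 0$mode & 077 )); then             # any group/other permission bits set?
    echo "PSK file $psk has mode $mode; restrict it with: chmod 0600 $psk" >&2
fi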
00:26:26.944 [2024-05-15 13:44:39.974506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.204 [2024-05-15 13:44:40.075473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.776 13:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:27.776 13:44:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:27.776 13:44:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pftivPFwd 00:26:28.041 [2024-05-15 13:44:41.091100] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:28.041 [2024-05-15 13:44:41.091205] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:26:28.041 [2024-05-15 13:44:41.091218] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.9pftivPFwd 00:26:28.041 2024/05/15 13:44:41 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.9pftivPFwd subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:26:28.041 request: 00:26:28.041 { 00:26:28.041 "method": "bdev_nvme_attach_controller", 00:26:28.041 "params": { 00:26:28.041 "name": "TLSTEST", 00:26:28.041 "trtype": "tcp", 00:26:28.041 "traddr": "10.0.0.2", 00:26:28.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:28.041 "adrfam": "ipv4", 00:26:28.041 "trsvcid": "4420", 00:26:28.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:28.041 "psk": "/tmp/tmp.9pftivPFwd" 00:26:28.041 } 00:26:28.041 } 00:26:28.041 Got JSON-RPC error response 00:26:28.041 GoRPCClient: error on JSON-RPC call 00:26:28.041 13:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 100819 00:26:28.041 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100819 ']' 00:26:28.041 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100819 00:26:28.041 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:28.041 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:28.041 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100819 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:28.310 killing process with pid 100819 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100819' 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100819 00:26:28.310 Received shutdown signal, test time was about 10.000000 seconds 00:26:28.310 00:26:28.310 Latency(us) 00:26:28.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:28.310 =================================================================================================================== 00:26:28.310 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100819 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- 
target/tls.sh@37 -- # return 1 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 100572 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100572 ']' 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100572 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100572 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:28.310 killing process with pid 100572 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100572' 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100572 00:26:28.310 [2024-05-15 13:44:41.389221] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:28.310 [2024-05-15 13:44:41.389301] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:28.310 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100572 00:26:28.876 13:44:41 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:26:28.876 13:44:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:28.876 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:28.876 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:28.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.876 13:44:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100877 00:26:28.876 13:44:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100877 00:26:28.876 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100877 ']' 00:26:28.876 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.876 13:44:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:28.876 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:28.876 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.876 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:28.876 13:44:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:28.876 [2024-05-15 13:44:41.785362] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:26:28.876 [2024-05-15 13:44:41.785463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.876 [2024-05-15 13:44:41.905528] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:28.876 [2024-05-15 13:44:41.925022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.143 [2024-05-15 13:44:42.056160] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:29.143 [2024-05-15 13:44:42.056235] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:29.143 [2024-05-15 13:44:42.056260] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:29.143 [2024-05-15 13:44:42.056271] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:29.143 [2024-05-15 13:44:42.056289] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:29.143 [2024-05-15 13:44:42.056327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.9pftivPFwd 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.9pftivPFwd 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.9pftivPFwd 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9pftivPFwd 00:26:30.083 13:44:42 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:30.341 [2024-05-15 13:44:43.182047] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:30.341 13:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:30.598 13:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:30.856 [2024-05-15 13:44:43.770184] 
nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:30.856 [2024-05-15 13:44:43.770323] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:30.856 [2024-05-15 13:44:43.770558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.856 13:44:43 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:31.113 malloc0 00:26:31.113 13:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:31.370 13:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pftivPFwd 00:26:31.628 [2024-05-15 13:44:44.490510] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:26:31.628 [2024-05-15 13:44:44.490565] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:26:31.628 [2024-05-15 13:44:44.490613] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:26:31.628 2024/05/15 13:44:44 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.9pftivPFwd], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:26:31.628 request: 00:26:31.628 { 00:26:31.628 "method": "nvmf_subsystem_add_host", 00:26:31.628 "params": { 00:26:31.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:31.629 "host": "nqn.2016-06.io.spdk:host1", 00:26:31.629 "psk": "/tmp/tmp.9pftivPFwd" 00:26:31.629 } 00:26:31.629 } 00:26:31.629 Got JSON-RPC error response 00:26:31.629 GoRPCClient: error on JSON-RPC call 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 100877 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100877 ']' 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100877 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100877 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:31.629 killing process with pid 100877 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100877' 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100877 00:26:31.629 [2024-05-15 13:44:44.551978] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport 
is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:31.629 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100877 00:26:31.886 13:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.9pftivPFwd 00:26:31.886 13:44:44 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:26:31.886 13:44:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:31.886 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:31.886 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:31.886 13:44:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=100988 00:26:31.887 13:44:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:31.887 13:44:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 100988 00:26:31.887 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 100988 ']' 00:26:31.887 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.887 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:31.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.887 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.887 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:31.887 13:44:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:31.887 [2024-05-15 13:44:44.947204] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:31.887 [2024-05-15 13:44:44.947300] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.145 [2024-05-15 13:44:45.066413] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:32.145 [2024-05-15 13:44:45.086215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.145 [2024-05-15 13:44:45.214226] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.145 [2024-05-15 13:44:45.214302] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.145 [2024-05-15 13:44:45.214329] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.145 [2024-05-15 13:44:45.214337] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.145 [2024-05-15 13:44:45.214344] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
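The nvmf_subsystem_add_host call above failed because tcp.c refused to load the PSK ("Incorrect permissions for PSK file"); the script's response is the chmod 0600 seen a few lines up, after which a fresh target is started and the whole setup is repeated. A minimal sketch of just the remediation, assuming a target that is still running and reachable on its default RPC socket (the test instead tears the target down and rebuilds it):

    chmod 0600 /tmp/tmp.9pftivPFwd    # keep the PSK readable by its owner only
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
        nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pftivPFwd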
00:26:32.145 [2024-05-15 13:44:45.214371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:33.079 13:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:33.079 13:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:33.079 13:44:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:33.079 13:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:33.079 13:44:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:33.079 13:44:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:33.079 13:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.9pftivPFwd 00:26:33.079 13:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9pftivPFwd 00:26:33.079 13:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:33.340 [2024-05-15 13:44:46.267362] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:33.340 13:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:33.598 13:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:33.856 [2024-05-15 13:44:46.875457] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:33.856 [2024-05-15 13:44:46.875582] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:33.856 [2024-05-15 13:44:46.875819] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:33.856 13:44:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:34.113 malloc0 00:26:34.113 13:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:34.372 13:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pftivPFwd 00:26:34.630 [2024-05-15 13:44:47.634761] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:34.630 13:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:34.630 13:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=101090 00:26:34.630 13:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:34.630 13:44:47 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 101090 /var/tmp/bdevperf.sock 00:26:34.630 13:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 101090 ']' 00:26:34.630 13:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:34.630 13:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:34.630 13:44:47 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:34.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:34.630 13:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:34.630 13:44:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:34.630 [2024-05-15 13:44:47.701512] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:34.630 [2024-05-15 13:44:47.701619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101090 ] 00:26:34.888 [2024-05-15 13:44:47.821036] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:34.888 [2024-05-15 13:44:47.844700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.888 [2024-05-15 13:44:47.985844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.821 13:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:35.821 13:44:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:35.821 13:44:48 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pftivPFwd 00:26:36.079 [2024-05-15 13:44:48.972301] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:36.079 [2024-05-15 13:44:48.972470] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:36.079 TLSTESTn1 00:26:36.079 13:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:36.337 13:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:26:36.337 "subsystems": [ 00:26:36.337 { 00:26:36.337 "subsystem": "keyring", 00:26:36.337 "config": [] 00:26:36.337 }, 00:26:36.337 { 00:26:36.337 "subsystem": "iobuf", 00:26:36.337 "config": [ 00:26:36.337 { 00:26:36.337 "method": "iobuf_set_options", 00:26:36.337 "params": { 00:26:36.337 "large_bufsize": 135168, 00:26:36.337 "large_pool_count": 1024, 00:26:36.337 "small_bufsize": 8192, 00:26:36.337 "small_pool_count": 8192 00:26:36.337 } 00:26:36.337 } 00:26:36.337 ] 00:26:36.337 }, 00:26:36.337 { 00:26:36.337 "subsystem": "sock", 00:26:36.337 "config": [ 00:26:36.337 { 00:26:36.337 "method": "sock_impl_set_options", 00:26:36.337 "params": { 00:26:36.337 "enable_ktls": false, 00:26:36.337 "enable_placement_id": 0, 00:26:36.337 "enable_quickack": false, 00:26:36.337 "enable_recv_pipe": true, 00:26:36.337 "enable_zerocopy_send_client": false, 00:26:36.337 "enable_zerocopy_send_server": true, 00:26:36.337 "impl_name": "posix", 00:26:36.337 "recv_buf_size": 2097152, 00:26:36.337 "send_buf_size": 2097152, 00:26:36.337 "tls_version": 0, 00:26:36.337 "zerocopy_threshold": 0 00:26:36.337 } 00:26:36.337 }, 00:26:36.337 { 00:26:36.337 "method": "sock_impl_set_options", 00:26:36.337 "params": { 00:26:36.337 "enable_ktls": false, 00:26:36.337 "enable_placement_id": 
0, 00:26:36.337 "enable_quickack": false, 00:26:36.337 "enable_recv_pipe": true, 00:26:36.337 "enable_zerocopy_send_client": false, 00:26:36.337 "enable_zerocopy_send_server": true, 00:26:36.337 "impl_name": "ssl", 00:26:36.337 "recv_buf_size": 4096, 00:26:36.337 "send_buf_size": 4096, 00:26:36.337 "tls_version": 0, 00:26:36.337 "zerocopy_threshold": 0 00:26:36.337 } 00:26:36.337 } 00:26:36.337 ] 00:26:36.337 }, 00:26:36.337 { 00:26:36.337 "subsystem": "vmd", 00:26:36.337 "config": [] 00:26:36.337 }, 00:26:36.337 { 00:26:36.337 "subsystem": "accel", 00:26:36.337 "config": [ 00:26:36.337 { 00:26:36.337 "method": "accel_set_options", 00:26:36.337 "params": { 00:26:36.337 "buf_count": 2048, 00:26:36.337 "large_cache_size": 16, 00:26:36.337 "sequence_count": 2048, 00:26:36.337 "small_cache_size": 128, 00:26:36.337 "task_count": 2048 00:26:36.337 } 00:26:36.337 } 00:26:36.337 ] 00:26:36.337 }, 00:26:36.337 { 00:26:36.337 "subsystem": "bdev", 00:26:36.337 "config": [ 00:26:36.337 { 00:26:36.337 "method": "bdev_set_options", 00:26:36.337 "params": { 00:26:36.337 "bdev_auto_examine": true, 00:26:36.337 "bdev_io_cache_size": 256, 00:26:36.337 "bdev_io_pool_size": 65535, 00:26:36.337 "iobuf_large_cache_size": 16, 00:26:36.337 "iobuf_small_cache_size": 128 00:26:36.337 } 00:26:36.337 }, 00:26:36.337 { 00:26:36.337 "method": "bdev_raid_set_options", 00:26:36.337 "params": { 00:26:36.337 "process_window_size_kb": 1024 00:26:36.337 } 00:26:36.337 }, 00:26:36.337 { 00:26:36.337 "method": "bdev_iscsi_set_options", 00:26:36.337 "params": { 00:26:36.337 "timeout_sec": 30 00:26:36.337 } 00:26:36.337 }, 00:26:36.337 { 00:26:36.337 "method": "bdev_nvme_set_options", 00:26:36.337 "params": { 00:26:36.337 "action_on_timeout": "none", 00:26:36.337 "allow_accel_sequence": false, 00:26:36.337 "arbitration_burst": 0, 00:26:36.337 "bdev_retry_count": 3, 00:26:36.338 "ctrlr_loss_timeout_sec": 0, 00:26:36.338 "delay_cmd_submit": true, 00:26:36.338 "dhchap_dhgroups": [ 00:26:36.338 "null", 00:26:36.338 "ffdhe2048", 00:26:36.338 "ffdhe3072", 00:26:36.338 "ffdhe4096", 00:26:36.338 "ffdhe6144", 00:26:36.338 "ffdhe8192" 00:26:36.338 ], 00:26:36.338 "dhchap_digests": [ 00:26:36.338 "sha256", 00:26:36.338 "sha384", 00:26:36.338 "sha512" 00:26:36.338 ], 00:26:36.338 "disable_auto_failback": false, 00:26:36.338 "fast_io_fail_timeout_sec": 0, 00:26:36.338 "generate_uuids": false, 00:26:36.338 "high_priority_weight": 0, 00:26:36.338 "io_path_stat": false, 00:26:36.338 "io_queue_requests": 0, 00:26:36.338 "keep_alive_timeout_ms": 10000, 00:26:36.338 "low_priority_weight": 0, 00:26:36.338 "medium_priority_weight": 0, 00:26:36.338 "nvme_adminq_poll_period_us": 10000, 00:26:36.338 "nvme_error_stat": false, 00:26:36.338 "nvme_ioq_poll_period_us": 0, 00:26:36.338 "rdma_cm_event_timeout_ms": 0, 00:26:36.338 "rdma_max_cq_size": 0, 00:26:36.338 "rdma_srq_size": 0, 00:26:36.338 "reconnect_delay_sec": 0, 00:26:36.338 "timeout_admin_us": 0, 00:26:36.338 "timeout_us": 0, 00:26:36.338 "transport_ack_timeout": 0, 00:26:36.338 "transport_retry_count": 4, 00:26:36.338 "transport_tos": 0 00:26:36.338 } 00:26:36.338 }, 00:26:36.338 { 00:26:36.338 "method": "bdev_nvme_set_hotplug", 00:26:36.338 "params": { 00:26:36.338 "enable": false, 00:26:36.338 "period_us": 100000 00:26:36.338 } 00:26:36.338 }, 00:26:36.338 { 00:26:36.338 "method": "bdev_malloc_create", 00:26:36.338 "params": { 00:26:36.338 "block_size": 4096, 00:26:36.338 "name": "malloc0", 00:26:36.338 "num_blocks": 8192, 00:26:36.338 "optimal_io_boundary": 0, 00:26:36.338 
"physical_block_size": 4096, 00:26:36.338 "uuid": "13baedcd-c03b-48fb-ad63-75b677e984d7" 00:26:36.338 } 00:26:36.338 }, 00:26:36.338 { 00:26:36.338 "method": "bdev_wait_for_examine" 00:26:36.338 } 00:26:36.338 ] 00:26:36.338 }, 00:26:36.338 { 00:26:36.338 "subsystem": "nbd", 00:26:36.338 "config": [] 00:26:36.338 }, 00:26:36.338 { 00:26:36.338 "subsystem": "scheduler", 00:26:36.338 "config": [ 00:26:36.338 { 00:26:36.338 "method": "framework_set_scheduler", 00:26:36.338 "params": { 00:26:36.338 "name": "static" 00:26:36.338 } 00:26:36.338 } 00:26:36.338 ] 00:26:36.338 }, 00:26:36.338 { 00:26:36.338 "subsystem": "nvmf", 00:26:36.338 "config": [ 00:26:36.338 { 00:26:36.338 "method": "nvmf_set_config", 00:26:36.338 "params": { 00:26:36.338 "admin_cmd_passthru": { 00:26:36.338 "identify_ctrlr": false 00:26:36.338 }, 00:26:36.338 "discovery_filter": "match_any" 00:26:36.338 } 00:26:36.338 }, 00:26:36.338 { 00:26:36.338 "method": "nvmf_set_max_subsystems", 00:26:36.338 "params": { 00:26:36.338 "max_subsystems": 1024 00:26:36.338 } 00:26:36.338 }, 00:26:36.338 { 00:26:36.338 "method": "nvmf_set_crdt", 00:26:36.338 "params": { 00:26:36.338 "crdt1": 0, 00:26:36.338 "crdt2": 0, 00:26:36.338 "crdt3": 0 00:26:36.338 } 00:26:36.338 }, 00:26:36.338 { 00:26:36.338 "method": "nvmf_create_transport", 00:26:36.338 "params": { 00:26:36.338 "abort_timeout_sec": 1, 00:26:36.338 "ack_timeout": 0, 00:26:36.338 "buf_cache_size": 4294967295, 00:26:36.338 "c2h_success": false, 00:26:36.338 "data_wr_pool_size": 0, 00:26:36.338 "dif_insert_or_strip": false, 00:26:36.338 "in_capsule_data_size": 4096, 00:26:36.338 "io_unit_size": 131072, 00:26:36.338 "max_aq_depth": 128, 00:26:36.338 "max_io_qpairs_per_ctrlr": 127, 00:26:36.338 "max_io_size": 131072, 00:26:36.338 "max_queue_depth": 128, 00:26:36.338 "num_shared_buffers": 511, 00:26:36.338 "sock_priority": 0, 00:26:36.338 "trtype": "TCP", 00:26:36.338 "zcopy": false 00:26:36.338 } 00:26:36.338 }, 00:26:36.338 { 00:26:36.338 "method": "nvmf_create_subsystem", 00:26:36.338 "params": { 00:26:36.338 "allow_any_host": false, 00:26:36.338 "ana_reporting": false, 00:26:36.338 "max_cntlid": 65519, 00:26:36.338 "max_namespaces": 10, 00:26:36.338 "min_cntlid": 1, 00:26:36.338 "model_number": "SPDK bdev Controller", 00:26:36.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.338 "serial_number": "SPDK00000000000001" 00:26:36.338 } 00:26:36.338 }, 00:26:36.338 { 00:26:36.338 "method": "nvmf_subsystem_add_host", 00:26:36.338 "params": { 00:26:36.338 "host": "nqn.2016-06.io.spdk:host1", 00:26:36.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.338 "psk": "/tmp/tmp.9pftivPFwd" 00:26:36.338 } 00:26:36.338 }, 00:26:36.338 { 00:26:36.338 "method": "nvmf_subsystem_add_ns", 00:26:36.338 "params": { 00:26:36.338 "namespace": { 00:26:36.338 "bdev_name": "malloc0", 00:26:36.338 "nguid": "13BAEDCDC03B48FBAD6375B677E984D7", 00:26:36.338 "no_auto_visible": false, 00:26:36.338 "nsid": 1, 00:26:36.338 "uuid": "13baedcd-c03b-48fb-ad63-75b677e984d7" 00:26:36.338 }, 00:26:36.338 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:26:36.338 } 00:26:36.338 }, 00:26:36.338 { 00:26:36.338 "method": "nvmf_subsystem_add_listener", 00:26:36.338 "params": { 00:26:36.338 "listen_address": { 00:26:36.338 "adrfam": "IPv4", 00:26:36.338 "traddr": "10.0.0.2", 00:26:36.338 "trsvcid": "4420", 00:26:36.338 "trtype": "TCP" 00:26:36.338 }, 00:26:36.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.338 "secure_channel": true 00:26:36.338 } 00:26:36.338 } 00:26:36.338 ] 00:26:36.338 } 00:26:36.338 ] 00:26:36.338 }' 00:26:36.338 
13:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:26:36.905 13:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:26:36.905 "subsystems": [ 00:26:36.905 { 00:26:36.905 "subsystem": "keyring", 00:26:36.905 "config": [] 00:26:36.905 }, 00:26:36.905 { 00:26:36.905 "subsystem": "iobuf", 00:26:36.905 "config": [ 00:26:36.905 { 00:26:36.905 "method": "iobuf_set_options", 00:26:36.905 "params": { 00:26:36.905 "large_bufsize": 135168, 00:26:36.905 "large_pool_count": 1024, 00:26:36.905 "small_bufsize": 8192, 00:26:36.905 "small_pool_count": 8192 00:26:36.905 } 00:26:36.905 } 00:26:36.905 ] 00:26:36.905 }, 00:26:36.905 { 00:26:36.905 "subsystem": "sock", 00:26:36.905 "config": [ 00:26:36.905 { 00:26:36.905 "method": "sock_impl_set_options", 00:26:36.906 "params": { 00:26:36.906 "enable_ktls": false, 00:26:36.906 "enable_placement_id": 0, 00:26:36.906 "enable_quickack": false, 00:26:36.906 "enable_recv_pipe": true, 00:26:36.906 "enable_zerocopy_send_client": false, 00:26:36.906 "enable_zerocopy_send_server": true, 00:26:36.906 "impl_name": "posix", 00:26:36.906 "recv_buf_size": 2097152, 00:26:36.906 "send_buf_size": 2097152, 00:26:36.906 "tls_version": 0, 00:26:36.906 "zerocopy_threshold": 0 00:26:36.906 } 00:26:36.906 }, 00:26:36.906 { 00:26:36.906 "method": "sock_impl_set_options", 00:26:36.906 "params": { 00:26:36.906 "enable_ktls": false, 00:26:36.906 "enable_placement_id": 0, 00:26:36.906 "enable_quickack": false, 00:26:36.906 "enable_recv_pipe": true, 00:26:36.906 "enable_zerocopy_send_client": false, 00:26:36.906 "enable_zerocopy_send_server": true, 00:26:36.906 "impl_name": "ssl", 00:26:36.906 "recv_buf_size": 4096, 00:26:36.906 "send_buf_size": 4096, 00:26:36.906 "tls_version": 0, 00:26:36.906 "zerocopy_threshold": 0 00:26:36.906 } 00:26:36.906 } 00:26:36.906 ] 00:26:36.906 }, 00:26:36.906 { 00:26:36.906 "subsystem": "vmd", 00:26:36.906 "config": [] 00:26:36.906 }, 00:26:36.906 { 00:26:36.906 "subsystem": "accel", 00:26:36.906 "config": [ 00:26:36.906 { 00:26:36.906 "method": "accel_set_options", 00:26:36.906 "params": { 00:26:36.906 "buf_count": 2048, 00:26:36.906 "large_cache_size": 16, 00:26:36.906 "sequence_count": 2048, 00:26:36.906 "small_cache_size": 128, 00:26:36.906 "task_count": 2048 00:26:36.906 } 00:26:36.906 } 00:26:36.906 ] 00:26:36.906 }, 00:26:36.906 { 00:26:36.906 "subsystem": "bdev", 00:26:36.906 "config": [ 00:26:36.906 { 00:26:36.906 "method": "bdev_set_options", 00:26:36.906 "params": { 00:26:36.906 "bdev_auto_examine": true, 00:26:36.906 "bdev_io_cache_size": 256, 00:26:36.906 "bdev_io_pool_size": 65535, 00:26:36.906 "iobuf_large_cache_size": 16, 00:26:36.906 "iobuf_small_cache_size": 128 00:26:36.906 } 00:26:36.906 }, 00:26:36.906 { 00:26:36.906 "method": "bdev_raid_set_options", 00:26:36.906 "params": { 00:26:36.906 "process_window_size_kb": 1024 00:26:36.906 } 00:26:36.906 }, 00:26:36.906 { 00:26:36.906 "method": "bdev_iscsi_set_options", 00:26:36.906 "params": { 00:26:36.906 "timeout_sec": 30 00:26:36.906 } 00:26:36.906 }, 00:26:36.906 { 00:26:36.906 "method": "bdev_nvme_set_options", 00:26:36.906 "params": { 00:26:36.906 "action_on_timeout": "none", 00:26:36.906 "allow_accel_sequence": false, 00:26:36.906 "arbitration_burst": 0, 00:26:36.906 "bdev_retry_count": 3, 00:26:36.906 "ctrlr_loss_timeout_sec": 0, 00:26:36.906 "delay_cmd_submit": true, 00:26:36.906 "dhchap_dhgroups": [ 00:26:36.906 "null", 00:26:36.906 "ffdhe2048", 00:26:36.906 "ffdhe3072", 
00:26:36.906 "ffdhe4096", 00:26:36.906 "ffdhe6144", 00:26:36.906 "ffdhe8192" 00:26:36.906 ], 00:26:36.906 "dhchap_digests": [ 00:26:36.906 "sha256", 00:26:36.906 "sha384", 00:26:36.906 "sha512" 00:26:36.906 ], 00:26:36.906 "disable_auto_failback": false, 00:26:36.906 "fast_io_fail_timeout_sec": 0, 00:26:36.906 "generate_uuids": false, 00:26:36.906 "high_priority_weight": 0, 00:26:36.906 "io_path_stat": false, 00:26:36.906 "io_queue_requests": 512, 00:26:36.906 "keep_alive_timeout_ms": 10000, 00:26:36.906 "low_priority_weight": 0, 00:26:36.906 "medium_priority_weight": 0, 00:26:36.906 "nvme_adminq_poll_period_us": 10000, 00:26:36.906 "nvme_error_stat": false, 00:26:36.906 "nvme_ioq_poll_period_us": 0, 00:26:36.906 "rdma_cm_event_timeout_ms": 0, 00:26:36.906 "rdma_max_cq_size": 0, 00:26:36.906 "rdma_srq_size": 0, 00:26:36.906 "reconnect_delay_sec": 0, 00:26:36.906 "timeout_admin_us": 0, 00:26:36.906 "timeout_us": 0, 00:26:36.906 "transport_ack_timeout": 0, 00:26:36.906 "transport_retry_count": 4, 00:26:36.906 "transport_tos": 0 00:26:36.906 } 00:26:36.906 }, 00:26:36.906 { 00:26:36.906 "method": "bdev_nvme_attach_controller", 00:26:36.906 "params": { 00:26:36.906 "adrfam": "IPv4", 00:26:36.906 "ctrlr_loss_timeout_sec": 0, 00:26:36.906 "ddgst": false, 00:26:36.906 "fast_io_fail_timeout_sec": 0, 00:26:36.906 "hdgst": false, 00:26:36.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.906 "name": "TLSTEST", 00:26:36.906 "prchk_guard": false, 00:26:36.906 "prchk_reftag": false, 00:26:36.906 "psk": "/tmp/tmp.9pftivPFwd", 00:26:36.906 "reconnect_delay_sec": 0, 00:26:36.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.906 "traddr": "10.0.0.2", 00:26:36.906 "trsvcid": "4420", 00:26:36.906 "trtype": "TCP" 00:26:36.906 } 00:26:36.906 }, 00:26:36.906 { 00:26:36.906 "method": "bdev_nvme_set_hotplug", 00:26:36.906 "params": { 00:26:36.906 "enable": false, 00:26:36.906 "period_us": 100000 00:26:36.906 } 00:26:36.906 }, 00:26:36.906 { 00:26:36.906 "method": "bdev_wait_for_examine" 00:26:36.906 } 00:26:36.906 ] 00:26:36.906 }, 00:26:36.906 { 00:26:36.906 "subsystem": "nbd", 00:26:36.906 "config": [] 00:26:36.906 } 00:26:36.906 ] 00:26:36.906 }' 00:26:36.906 13:44:49 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 101090 00:26:36.906 13:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 101090 ']' 00:26:36.906 13:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 101090 00:26:36.906 13:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:36.906 13:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:36.906 13:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101090 00:26:36.906 13:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:36.906 13:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:36.906 killing process with pid 101090 00:26:36.906 13:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101090' 00:26:36.906 Received shutdown signal, test time was about 10.000000 seconds 00:26:36.906 00:26:36.906 Latency(us) 00:26:36.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.906 =================================================================================================================== 00:26:36.906 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:36.906 13:44:49 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@965 -- # kill 101090 00:26:36.906 [2024-05-15 13:44:49.774149] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:36.906 13:44:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 101090 00:26:37.165 13:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 100988 00:26:37.165 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 100988 ']' 00:26:37.165 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 100988 00:26:37.165 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:37.165 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:37.165 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 100988 00:26:37.165 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:37.165 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:37.165 killing process with pid 100988 00:26:37.165 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 100988' 00:26:37.165 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 100988 00:26:37.165 [2024-05-15 13:44:50.117715] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:37.165 [2024-05-15 13:44:50.117761] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:37.165 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 100988 00:26:37.423 13:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:26:37.423 13:44:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:37.423 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:37.423 13:44:50 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:26:37.423 "subsystems": [ 00:26:37.423 { 00:26:37.423 "subsystem": "keyring", 00:26:37.423 "config": [] 00:26:37.423 }, 00:26:37.423 { 00:26:37.423 "subsystem": "iobuf", 00:26:37.423 "config": [ 00:26:37.423 { 00:26:37.423 "method": "iobuf_set_options", 00:26:37.423 "params": { 00:26:37.423 "large_bufsize": 135168, 00:26:37.423 "large_pool_count": 1024, 00:26:37.423 "small_bufsize": 8192, 00:26:37.423 "small_pool_count": 8192 00:26:37.423 } 00:26:37.423 } 00:26:37.423 ] 00:26:37.423 }, 00:26:37.423 { 00:26:37.423 "subsystem": "sock", 00:26:37.423 "config": [ 00:26:37.423 { 00:26:37.423 "method": "sock_impl_set_options", 00:26:37.423 "params": { 00:26:37.423 "enable_ktls": false, 00:26:37.423 "enable_placement_id": 0, 00:26:37.423 "enable_quickack": false, 00:26:37.423 "enable_recv_pipe": true, 00:26:37.423 "enable_zerocopy_send_client": false, 00:26:37.423 "enable_zerocopy_send_server": true, 00:26:37.423 "impl_name": "posix", 00:26:37.423 "recv_buf_size": 2097152, 00:26:37.423 "send_buf_size": 2097152, 00:26:37.423 "tls_version": 0, 00:26:37.423 "zerocopy_threshold": 0 00:26:37.423 } 00:26:37.423 }, 00:26:37.423 { 00:26:37.423 "method": "sock_impl_set_options", 00:26:37.423 "params": { 00:26:37.423 "enable_ktls": false, 00:26:37.423 "enable_placement_id": 0, 00:26:37.423 "enable_quickack": false, 00:26:37.423 
"enable_recv_pipe": true, 00:26:37.423 "enable_zerocopy_send_client": false, 00:26:37.423 "enable_zerocopy_send_server": true, 00:26:37.423 "impl_name": "ssl", 00:26:37.423 "recv_buf_size": 4096, 00:26:37.423 "send_buf_size": 4096, 00:26:37.423 "tls_version": 0, 00:26:37.423 "zerocopy_threshold": 0 00:26:37.423 } 00:26:37.423 } 00:26:37.423 ] 00:26:37.423 }, 00:26:37.423 { 00:26:37.423 "subsystem": "vmd", 00:26:37.423 "config": [] 00:26:37.423 }, 00:26:37.423 { 00:26:37.423 "subsystem": "accel", 00:26:37.423 "config": [ 00:26:37.423 { 00:26:37.423 "method": "accel_set_options", 00:26:37.424 "params": { 00:26:37.424 "buf_count": 2048, 00:26:37.424 "large_cache_size": 16, 00:26:37.424 "sequence_count": 2048, 00:26:37.424 "small_cache_size": 128, 00:26:37.424 "task_count": 2048 00:26:37.424 } 00:26:37.424 } 00:26:37.424 ] 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "subsystem": "bdev", 00:26:37.424 "config": [ 00:26:37.424 { 00:26:37.424 "method": "bdev_set_options", 00:26:37.424 "params": { 00:26:37.424 "bdev_auto_examine": true, 00:26:37.424 "bdev_io_cache_size": 256, 00:26:37.424 "bdev_io_pool_size": 65535, 00:26:37.424 "iobuf_large_cache_size": 16, 00:26:37.424 "iobuf_small_cache_size": 128 00:26:37.424 } 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "method": "bdev_raid_set_options", 00:26:37.424 "params": { 00:26:37.424 "process_window_size_kb": 1024 00:26:37.424 } 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "method": "bdev_iscsi_set_options", 00:26:37.424 "params": { 00:26:37.424 "timeout_sec": 30 00:26:37.424 } 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "method": "bdev_nvme_set_options", 00:26:37.424 "params": { 00:26:37.424 "action_on_timeout": "none", 00:26:37.424 "allow_accel_sequence": false, 00:26:37.424 "arbitration_burst": 0, 00:26:37.424 "bdev_retry_count": 3, 00:26:37.424 "ctrlr_loss_timeout_sec": 0, 00:26:37.424 "delay_cmd_submit": true, 00:26:37.424 "dhchap_dhgroups": [ 00:26:37.424 "null", 00:26:37.424 "ffdhe2048", 00:26:37.424 "ffdhe3072", 00:26:37.424 "ffdhe4096", 00:26:37.424 "ffdhe6144", 00:26:37.424 "ffdhe8192" 00:26:37.424 ], 00:26:37.424 "dhchap_digests": [ 00:26:37.424 "sha256", 00:26:37.424 "sha384", 00:26:37.424 "sha512" 00:26:37.424 ], 00:26:37.424 "disable_auto_failback": false, 00:26:37.424 "fast_io_fail_timeout_sec": 0, 00:26:37.424 "generate_uuids": false, 00:26:37.424 "high_priority_weight": 0, 00:26:37.424 "io_path_stat": false, 00:26:37.424 "io_queue_requests": 0, 00:26:37.424 "keep_alive_timeout_ms": 10000, 00:26:37.424 "low_priority_weight": 0, 00:26:37.424 "medium_priority_weight": 0, 00:26:37.424 "nvme_adminq_poll_period_us": 10000, 00:26:37.424 "nvme_error_stat": false, 00:26:37.424 "nvme_ioq_poll_period_us": 0, 00:26:37.424 "rdma_cm_event_timeout_ms": 0, 00:26:37.424 "rdma_max_cq_size": 0, 00:26:37.424 "rdma_srq_size": 0, 00:26:37.424 "reconnect_delay_sec": 0, 00:26:37.424 "timeout_admin_us": 0, 00:26:37.424 "timeout_us": 0, 00:26:37.424 "transport_ack_timeout": 0, 00:26:37.424 "transport_retry_count": 4, 00:26:37.424 "transport_tos": 0 00:26:37.424 } 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "method": "bdev_nvme_set_hotplug", 00:26:37.424 "params": { 00:26:37.424 "enable": false, 00:26:37.424 "period_us": 100000 00:26:37.424 } 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "method": "bdev_malloc_create", 00:26:37.424 "params": { 00:26:37.424 "block_size": 4096, 00:26:37.424 "name": "malloc0", 00:26:37.424 "num_blocks": 8192, 00:26:37.424 "optimal_io_boundary": 0, 00:26:37.424 "physical_block_size": 4096, 00:26:37.424 "uuid": 
"13baedcd-c03b-48fb-ad63-75b677e984d7" 00:26:37.424 } 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "method": "bdev_wait_for_examine" 00:26:37.424 } 00:26:37.424 ] 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "subsystem": "nbd", 00:26:37.424 "config": [] 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "subsystem": "scheduler", 00:26:37.424 "config": [ 00:26:37.424 { 00:26:37.424 "method": "framework_set_scheduler", 00:26:37.424 "params": { 00:26:37.424 "name": "static" 00:26:37.424 } 00:26:37.424 } 00:26:37.424 ] 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "subsystem": "nvmf", 00:26:37.424 "config": [ 00:26:37.424 { 00:26:37.424 "method": "nvmf_set_config", 00:26:37.424 "params": { 00:26:37.424 "admin_cmd_passthru": { 00:26:37.424 "identify_ctrlr": false 00:26:37.424 }, 00:26:37.424 "discovery_filter": "match_any" 00:26:37.424 } 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "method": "nvmf_set_max_subsystems", 00:26:37.424 "params": { 00:26:37.424 "max_subsystems": 1024 00:26:37.424 } 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "method": "nvmf_set_crdt", 00:26:37.424 "params": { 00:26:37.424 "crdt1": 0, 00:26:37.424 "crdt2": 0, 00:26:37.424 "crdt3": 0 00:26:37.424 } 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "method": "nvmf_create_transport", 00:26:37.424 "params": { 00:26:37.424 "abort_timeout_sec": 1, 00:26:37.424 "ack_timeout": 0, 00:26:37.424 "buf_cache_size": 4294967295, 00:26:37.424 "c2h_success": false, 00:26:37.424 "data_wr_pool_size": 0, 00:26:37.424 "dif_insert_or_strip": false, 00:26:37.424 "in_capsule_data_size": 4096, 00:26:37.424 "io_unit_size": 131072, 00:26:37.424 "max_aq_depth": 128, 00:26:37.424 "max_io_qpairs_per_ctrlr": 127, 00:26:37.424 "max_io_size": 131072, 00:26:37.424 "max_queue_depth": 128, 00:26:37.424 "num_shared_buffers": 511, 00:26:37.424 "sock_priority": 0, 00:26:37.424 "trtype": "TCP", 00:26:37.424 "zcopy": false 00:26:37.424 } 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "method": "nvmf_create_subsystem", 00:26:37.424 "params": { 00:26:37.424 "allow_any_host": false, 00:26:37.424 "ana_reporting": false, 00:26:37.424 "max_cntlid": 65519, 00:26:37.424 "max_namespaces": 10, 00:26:37.424 "min_cntlid": 1, 00:26:37.424 "model_number": "SPDK bdev Controller", 00:26:37.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:37.424 "serial_number": "SPDK00000000000001" 00:26:37.424 } 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "method": "nvmf_subsystem_add_host", 00:26:37.424 "params": { 00:26:37.424 "host": "nqn.2016-06.io.spdk:host1", 00:26:37.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:37.424 "psk": "/tmp/tmp.9pftivPFwd" 00:26:37.424 } 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "method": "nvmf_subsystem_add_ns", 00:26:37.424 "params": { 00:26:37.424 "namespace": { 00:26:37.424 "bdev_name": "malloc0", 00:26:37.424 "nguid": "13BAEDCDC03B48FBAD6375B677E984D7", 00:26:37.424 "no_auto_visible": false, 00:26:37.424 "nsid": 1, 00:26:37.424 "uuid": "13baedcd-c03b-48fb-ad63-75b677e984d7" 00:26:37.424 }, 00:26:37.424 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:26:37.424 } 00:26:37.424 }, 00:26:37.424 { 00:26:37.424 "method": "nvmf_subsystem_add_listener", 00:26:37.424 "params": { 00:26:37.424 "listen_address": { 00:26:37.424 "adrfam": "IPv4", 00:26:37.424 "traddr": "10.0.0.2", 00:26:37.424 "trsvcid": "4420", 00:26:37.424 "trtype": "TCP" 00:26:37.424 }, 00:26:37.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:37.424 "secure_channel": true 00:26:37.424 } 00:26:37.424 } 00:26:37.424 ] 00:26:37.424 } 00:26:37.424 ] 00:26:37.424 }' 00:26:37.424 13:44:50 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@10 -- # set +x 00:26:37.424 13:44:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101170 00:26:37.424 13:44:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:26:37.424 13:44:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101170 00:26:37.424 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 101170 ']' 00:26:37.424 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:37.424 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:37.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:37.424 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:37.424 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:37.424 13:44:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:37.424 [2024-05-15 13:44:50.452516] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:37.424 [2024-05-15 13:44:50.452647] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:37.683 [2024-05-15 13:44:50.578637] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:37.683 [2024-05-15 13:44:50.590748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.683 [2024-05-15 13:44:50.685884] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:37.683 [2024-05-15 13:44:50.685939] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:37.683 [2024-05-15 13:44:50.685974] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:37.683 [2024-05-15 13:44:50.685982] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:37.683 [2024-05-15 13:44:50.685989] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
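The target started here (pid 101170) is not configured over RPC at all: nvmf_tgt receives -c /dev/fd/62 while the JSON captured earlier with save_config is echoed into that descriptor, which is what bash process substitution looks like from inside the process. A sketch of the same pattern, reusing the tgtconf variable the trace shows:

    tgtconf=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config)   # snapshot the live configuration
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf")                      # replay it at startup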
00:26:37.683 [2024-05-15 13:44:50.686075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:37.941 [2024-05-15 13:44:50.906775] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:37.941 [2024-05-15 13:44:50.922707] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:37.941 [2024-05-15 13:44:50.938702] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:37.941 [2024-05-15 13:44:50.938816] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:37.941 [2024-05-15 13:44:50.939016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.507 13:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=101215 00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 101215 /var/tmp/bdevperf.sock 00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 101215 ']' 00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:38.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:26:38.508 13:44:51 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:26:38.508 "subsystems": [ 00:26:38.508 { 00:26:38.508 "subsystem": "keyring", 00:26:38.508 "config": [] 00:26:38.508 }, 00:26:38.508 { 00:26:38.508 "subsystem": "iobuf", 00:26:38.508 "config": [ 00:26:38.508 { 00:26:38.508 "method": "iobuf_set_options", 00:26:38.508 "params": { 00:26:38.508 "large_bufsize": 135168, 00:26:38.508 "large_pool_count": 1024, 00:26:38.508 "small_bufsize": 8192, 00:26:38.508 "small_pool_count": 8192 00:26:38.508 } 00:26:38.508 } 00:26:38.508 ] 00:26:38.508 }, 00:26:38.508 { 00:26:38.508 "subsystem": "sock", 00:26:38.508 "config": [ 00:26:38.508 { 00:26:38.508 "method": "sock_impl_set_options", 00:26:38.508 "params": { 00:26:38.508 "enable_ktls": false, 00:26:38.508 "enable_placement_id": 0, 00:26:38.508 "enable_quickack": false, 00:26:38.508 "enable_recv_pipe": true, 00:26:38.508 "enable_zerocopy_send_client": false, 00:26:38.508 "enable_zerocopy_send_server": true, 00:26:38.508 "impl_name": "posix", 00:26:38.508 "recv_buf_size": 2097152, 00:26:38.508 "send_buf_size": 2097152, 00:26:38.508 "tls_version": 0, 00:26:38.508 "zerocopy_threshold": 0 00:26:38.508 } 00:26:38.508 }, 00:26:38.508 { 00:26:38.508 "method": "sock_impl_set_options", 00:26:38.508 "params": { 00:26:38.508 "enable_ktls": false, 00:26:38.508 "enable_placement_id": 0, 00:26:38.508 "enable_quickack": false, 00:26:38.508 "enable_recv_pipe": true, 00:26:38.508 "enable_zerocopy_send_client": false, 00:26:38.508 "enable_zerocopy_send_server": true, 00:26:38.508 "impl_name": "ssl", 00:26:38.508 "recv_buf_size": 4096, 00:26:38.508 "send_buf_size": 4096, 00:26:38.508 "tls_version": 0, 00:26:38.508 "zerocopy_threshold": 0 00:26:38.508 } 00:26:38.508 } 00:26:38.508 ] 00:26:38.508 }, 00:26:38.508 { 00:26:38.508 "subsystem": "vmd", 00:26:38.508 "config": [] 00:26:38.508 }, 00:26:38.508 { 00:26:38.508 "subsystem": "accel", 00:26:38.508 "config": [ 00:26:38.508 { 00:26:38.508 "method": "accel_set_options", 00:26:38.508 "params": { 00:26:38.508 "buf_count": 2048, 00:26:38.508 "large_cache_size": 16, 00:26:38.508 "sequence_count": 2048, 00:26:38.508 "small_cache_size": 128, 00:26:38.508 "task_count": 2048 00:26:38.508 } 00:26:38.508 } 00:26:38.508 ] 00:26:38.508 }, 00:26:38.508 { 00:26:38.508 "subsystem": "bdev", 00:26:38.508 "config": [ 00:26:38.508 { 00:26:38.508 "method": "bdev_set_options", 00:26:38.508 "params": { 00:26:38.508 "bdev_auto_examine": true, 00:26:38.508 "bdev_io_cache_size": 256, 00:26:38.508 "bdev_io_pool_size": 65535, 00:26:38.508 "iobuf_large_cache_size": 16, 00:26:38.508 "iobuf_small_cache_size": 128 00:26:38.508 } 00:26:38.508 }, 00:26:38.508 { 00:26:38.508 "method": "bdev_raid_set_options", 00:26:38.508 "params": { 00:26:38.508 "process_window_size_kb": 1024 00:26:38.508 } 00:26:38.508 }, 00:26:38.508 { 00:26:38.508 "method": "bdev_iscsi_set_options", 00:26:38.508 "params": { 00:26:38.508 "timeout_sec": 30 00:26:38.508 } 00:26:38.508 }, 00:26:38.508 { 00:26:38.508 "method": "bdev_nvme_set_options", 00:26:38.508 "params": { 00:26:38.508 "action_on_timeout": "none", 00:26:38.508 "allow_accel_sequence": false, 00:26:38.508 "arbitration_burst": 0, 
00:26:38.508 "bdev_retry_count": 3, 00:26:38.508 "ctrlr_loss_timeout_sec": 0, 00:26:38.508 "delay_cmd_submit": true, 00:26:38.508 "dhchap_dhgroups": [ 00:26:38.508 "null", 00:26:38.508 "ffdhe2048", 00:26:38.508 "ffdhe3072", 00:26:38.508 "ffdhe4096", 00:26:38.508 "ffdhe6144", 00:26:38.508 "ffdhe8192" 00:26:38.508 ], 00:26:38.508 "dhchap_digests": [ 00:26:38.508 "sha256", 00:26:38.508 "sha384", 00:26:38.508 "sha512" 00:26:38.508 ], 00:26:38.508 "disable_auto_failback": false, 00:26:38.508 "fast_io_fail_timeout_sec": 0, 00:26:38.508 "generate_uuids": false, 00:26:38.508 "high_priority_weight": 0, 00:26:38.508 "io_path_stat": false, 00:26:38.508 "io_queue_requests": 512, 00:26:38.508 "keep_alive_timeout_ms": 10000, 00:26:38.508 "low_priority_weight": 0, 00:26:38.508 "medium_priority_weight": 0, 00:26:38.508 "nvme_adminq_poll_period_us": 10000, 00:26:38.508 "nvme_error_stat": false, 00:26:38.508 "nvme_ioq_poll_period_us": 0, 00:26:38.508 "rdma_cm_event_timeout_ms": 0, 00:26:38.508 "rdma_max_cq_size": 0, 00:26:38.508 "rdma_srq_size": 0, 00:26:38.508 "reconnect_delay_sec": 0, 00:26:38.508 "timeout_admin_us": 0, 00:26:38.508 "timeout_us": 0, 00:26:38.508 "transport_ack_timeout": 0, 00:26:38.508 "transport_retry_count": 4, 00:26:38.508 "transport_tos": 0 00:26:38.508 } 00:26:38.508 }, 00:26:38.508 { 00:26:38.508 "method": "bdev_nvme_attach_controller", 00:26:38.508 "params": { 00:26:38.508 "adrfam": "IPv4", 00:26:38.508 "ctrlr_loss_timeout_sec": 0, 00:26:38.508 "ddgst": false, 00:26:38.508 "fast_io_fail_timeout_sec": 0, 00:26:38.508 "hdgst": false, 00:26:38.508 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:38.508 "name": "TLSTEST", 00:26:38.508 "prchk_guard": false, 00:26:38.508 "prchk_reftag": false, 00:26:38.508 "psk": "/tmp/tmp.9pftivPFwd", 00:26:38.508 "reconnect_delay_sec": 0, 00:26:38.508 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:38.508 "traddr": "10.0.0.2", 00:26:38.508 "trsvcid": "4420", 00:26:38.508 "trtype": "TCP" 00:26:38.508 } 00:26:38.508 }, 00:26:38.508 { 00:26:38.508 "method": "bdev_nvme_set_hotplug", 00:26:38.508 "params": { 00:26:38.508 "enable": false, 00:26:38.508 "period_us": 100000 00:26:38.508 } 00:26:38.508 }, 00:26:38.508 { 00:26:38.508 "method": "bdev_wait_for_examine" 00:26:38.508 } 00:26:38.508 ] 00:26:38.508 }, 00:26:38.508 { 00:26:38.508 "subsystem": "nbd", 00:26:38.508 "config": [] 00:26:38.508 } 00:26:38.508 ] 00:26:38.508 }' 00:26:38.508 [2024-05-15 13:44:51.522074] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:38.509 [2024-05-15 13:44:51.522191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101215 ] 00:26:38.765 [2024-05-15 13:44:51.641591] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:38.765 [2024-05-15 13:44:51.659905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.765 [2024-05-15 13:44:51.787813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.023 [2024-05-15 13:44:51.980702] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:39.023 [2024-05-15 13:44:51.980868] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:39.589 13:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:39.589 13:44:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:39.589 13:44:52 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:26:39.589 Running I/O for 10 seconds... 00:26:49.697 00:26:49.697 Latency(us) 00:26:49.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.697 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:49.697 Verification LBA range: start 0x0 length 0x2000 00:26:49.697 TLSTESTn1 : 10.02 3458.62 13.51 0.00 0.00 36930.72 7923.90 31457.28 00:26:49.697 =================================================================================================================== 00:26:49.697 Total : 3458.62 13.51 0.00 0.00 36930.72 7923.90 31457.28 00:26:49.697 0 00:26:49.697 13:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:49.697 13:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 101215 00:26:49.697 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 101215 ']' 00:26:49.697 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 101215 00:26:49.697 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:49.697 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:49.697 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101215 00:26:49.697 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:49.697 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:49.697 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101215' 00:26:49.697 killing process with pid 101215 00:26:49.697 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 101215 00:26:49.697 Received shutdown signal, test time was about 10.000000 seconds 00:26:49.697 00:26:49.697 Latency(us) 00:26:49.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.697 =================================================================================================================== 00:26:49.697 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:49.697 [2024-05-15 13:45:02.692520] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:49.697 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 101215 00:26:49.956 13:45:02 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 101170 00:26:49.956 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 101170 ']' 00:26:49.956 13:45:02 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@950 -- # kill -0 101170 00:26:49.956 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:49.956 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:49.956 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101170 00:26:49.956 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:49.956 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:49.956 killing process with pid 101170 00:26:49.956 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101170' 00:26:49.956 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 101170 00:26:49.956 [2024-05-15 13:45:02.932796] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:49.956 [2024-05-15 13:45:02.932848] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:49.956 13:45:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 101170 00:26:50.215 13:45:03 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:26:50.215 13:45:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:50.215 13:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:50.215 13:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:50.215 13:45:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101361 00:26:50.215 13:45:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:50.215 13:45:03 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101361 00:26:50.215 13:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 101361 ']' 00:26:50.215 13:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:50.215 13:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:50.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:50.215 13:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:50.215 13:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:50.215 13:45:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:50.215 [2024-05-15 13:45:03.234961] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:50.215 [2024-05-15 13:45:03.235078] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:50.476 [2024-05-15 13:45:03.360556] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:50.476 [2024-05-15 13:45:03.379480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.476 [2024-05-15 13:45:03.507791] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:50.476 [2024-05-15 13:45:03.507873] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:50.476 [2024-05-15 13:45:03.507887] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:50.476 [2024-05-15 13:45:03.507898] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:50.476 [2024-05-15 13:45:03.507908] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:50.476 [2024-05-15 13:45:03.507947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.407 13:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:51.407 13:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:51.407 13:45:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:51.407 13:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:51.407 13:45:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:51.407 13:45:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:51.407 13:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.9pftivPFwd 00:26:51.407 13:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9pftivPFwd 00:26:51.407 13:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:51.665 [2024-05-15 13:45:04.609400] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:51.665 13:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:26:51.923 13:45:04 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:26:52.181 [2024-05-15 13:45:05.157476] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:52.181 [2024-05-15 13:45:05.157654] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:52.181 [2024-05-15 13:45:05.157888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.181 13:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:26:52.439 malloc0 00:26:52.439 13:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:26:52.751 13:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9pftivPFwd 00:26:53.009 [2024-05-15 13:45:05.917289] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:53.009 13:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=101459 00:26:53.009 13:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 
00:26:53.009 13:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:53.009 13:45:05 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 101459 /var/tmp/bdevperf.sock 00:26:53.009 13:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 101459 ']' 00:26:53.009 13:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:53.009 13:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:53.009 13:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:53.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:53.009 13:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:53.009 13:45:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:53.009 [2024-05-15 13:45:06.002531] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:53.009 [2024-05-15 13:45:06.002660] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101459 ] 00:26:53.266 [2024-05-15 13:45:06.126733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:53.266 [2024-05-15 13:45:06.144317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.266 [2024-05-15 13:45:06.272312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.198 13:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:54.198 13:45:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:54.198 13:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9pftivPFwd 00:26:54.513 13:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:54.793 [2024-05-15 13:45:07.611093] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:54.793 nvme0n1 00:26:54.793 13:45:07 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:54.793 Running I/O for 1 seconds... 
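This final pass swaps the raw PSK path for SPDK's keyring: the key file is registered once as key0 with keyring_file_add_key, and bdev_nvme_attach_controller then refers to it by name through --psk key0. A condensed sketch of those calls plus the perform_tests kick-off, all issued against the bdevperf RPC socket used above:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    $RPC keyring_file_add_key key0 /tmp/tmp.9pftivPFwd            # register the PSK file as a named key
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests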
00:26:56.165 00:26:56.165 Latency(us) 00:26:56.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.165 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:26:56.165 Verification LBA range: start 0x0 length 0x2000 00:26:56.165 nvme0n1 : 1.02 3422.27 13.37 0.00 0.00 37003.44 7626.01 29908.25 00:26:56.165 =================================================================================================================== 00:26:56.165 Total : 3422.27 13.37 0.00 0.00 37003.44 7626.01 29908.25 00:26:56.165 0 00:26:56.165 13:45:08 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 101459 00:26:56.165 13:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 101459 ']' 00:26:56.165 13:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 101459 00:26:56.165 13:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:56.165 13:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:56.165 13:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101459 00:26:56.165 13:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:56.165 13:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:56.165 killing process with pid 101459 00:26:56.165 13:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101459' 00:26:56.165 Received shutdown signal, test time was about 1.000000 seconds 00:26:56.165 00:26:56.165 Latency(us) 00:26:56.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.165 =================================================================================================================== 00:26:56.165 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:56.165 13:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 101459 00:26:56.165 13:45:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 101459 00:26:56.165 13:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 101361 00:26:56.165 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 101361 ']' 00:26:56.165 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 101361 00:26:56.165 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:26:56.165 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:56.165 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101361 00:26:56.165 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:56.165 killing process with pid 101361 00:26:56.165 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:56.165 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101361' 00:26:56.165 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 101361 00:26:56.165 [2024-05-15 13:45:09.240184] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:56.165 [2024-05-15 13:45:09.240226] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:56.165 13:45:09 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 101361 00:26:56.423 13:45:09 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:26:56.423 13:45:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:56.423 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:56.423 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:56.423 13:45:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101541 00:26:56.423 13:45:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:56.423 13:45:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101541 00:26:56.423 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 101541 ']' 00:26:56.423 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.423 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:56.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.423 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.423 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:56.423 13:45:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:56.680 [2024-05-15 13:45:09.523449] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:56.680 [2024-05-15 13:45:09.523545] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:56.680 [2024-05-15 13:45:09.642574] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:56.680 [2024-05-15 13:45:09.657589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.680 [2024-05-15 13:45:09.755610] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:56.680 [2024-05-15 13:45:09.755666] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:56.680 [2024-05-15 13:45:09.755678] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:56.680 [2024-05-15 13:45:09.755687] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:56.680 [2024-05-15 13:45:09.755694] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
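As a quick consistency check on the Latency table above (first run, nvme0n1), the MiB/s column is just IOPS multiplied by the 4 KiB I/O size bdevperf was started with (-o 4k); a one-liner reproduces it:

  awk 'BEGIN { printf "%.2f MiB/s\n", 3422.27 * 4096 / (1024 * 1024) }'   # prints 13.37 MiB/s, matching the table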
00:26:56.680 [2024-05-15 13:45:09.755719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.614 13:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:57.614 13:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:57.614 13:45:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:57.614 13:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:57.614 13:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:57.614 13:45:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.614 13:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:26:57.614 13:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.614 13:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:57.614 [2024-05-15 13:45:10.538834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.614 malloc0 00:26:57.615 [2024-05-15 13:45:10.569852] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:57.615 [2024-05-15 13:45:10.569942] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:57.615 [2024-05-15 13:45:10.570122] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.615 13:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.615 13:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=101591 00:26:57.615 13:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:26:57.615 13:45:10 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 101591 /var/tmp/bdevperf.sock 00:26:57.615 13:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 101591 ']' 00:26:57.615 13:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:57.615 13:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:57.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:57.615 13:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:57.615 13:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:57.615 13:45:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:26:57.615 [2024-05-15 13:45:10.649421] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:26:57.615 [2024-05-15 13:45:10.649516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101591 ] 00:26:57.883 [2024-05-15 13:45:10.767444] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
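The client side of each run (seen once above for bdevperf pid 101459 and repeated next for pid 101591) is driven over the bdevperf RPC socket. A minimal sketch, using the exact arguments from the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  $rpc -s $sock keyring_file_add_key key0 /tmp/tmp.9pftivPFwd      # register the PSK as key0
  $rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests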
00:26:57.883 [2024-05-15 13:45:10.779285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.883 [2024-05-15 13:45:10.878000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:58.817 13:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:58.817 13:45:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:26:58.817 13:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9pftivPFwd 00:26:59.076 13:45:11 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:26:59.334 [2024-05-15 13:45:12.187401] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:59.334 nvme0n1 00:26:59.334 13:45:12 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:59.334 Running I/O for 1 seconds... 00:27:00.726 00:27:00.726 Latency(us) 00:27:00.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.726 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:00.726 Verification LBA range: start 0x0 length 0x2000 00:27:00.726 nvme0n1 : 1.02 3911.11 15.28 0.00 0.00 32387.94 8340.95 26095.24 00:27:00.726 =================================================================================================================== 00:27:00.726 Total : 3911.11 15.28 0.00 0.00 32387.94 8340.95 26095.24 00:27:00.726 0 00:27:00.726 13:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:27:00.726 13:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.726 13:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:00.726 13:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.726 13:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:27:00.726 "subsystems": [ 00:27:00.726 { 00:27:00.726 "subsystem": "keyring", 00:27:00.726 "config": [ 00:27:00.726 { 00:27:00.726 "method": "keyring_file_add_key", 00:27:00.726 "params": { 00:27:00.726 "name": "key0", 00:27:00.726 "path": "/tmp/tmp.9pftivPFwd" 00:27:00.726 } 00:27:00.726 } 00:27:00.726 ] 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "subsystem": "iobuf", 00:27:00.726 "config": [ 00:27:00.726 { 00:27:00.726 "method": "iobuf_set_options", 00:27:00.726 "params": { 00:27:00.726 "large_bufsize": 135168, 00:27:00.726 "large_pool_count": 1024, 00:27:00.726 "small_bufsize": 8192, 00:27:00.726 "small_pool_count": 8192 00:27:00.726 } 00:27:00.726 } 00:27:00.726 ] 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "subsystem": "sock", 00:27:00.726 "config": [ 00:27:00.726 { 00:27:00.726 "method": "sock_impl_set_options", 00:27:00.726 "params": { 00:27:00.726 "enable_ktls": false, 00:27:00.726 "enable_placement_id": 0, 00:27:00.726 "enable_quickack": false, 00:27:00.726 "enable_recv_pipe": true, 00:27:00.726 "enable_zerocopy_send_client": false, 00:27:00.726 "enable_zerocopy_send_server": true, 00:27:00.726 "impl_name": "posix", 00:27:00.726 "recv_buf_size": 2097152, 00:27:00.726 "send_buf_size": 2097152, 00:27:00.726 "tls_version": 0, 00:27:00.726 "zerocopy_threshold": 0 00:27:00.726 } 00:27:00.726 }, 
00:27:00.726 { 00:27:00.726 "method": "sock_impl_set_options", 00:27:00.726 "params": { 00:27:00.726 "enable_ktls": false, 00:27:00.726 "enable_placement_id": 0, 00:27:00.726 "enable_quickack": false, 00:27:00.726 "enable_recv_pipe": true, 00:27:00.726 "enable_zerocopy_send_client": false, 00:27:00.726 "enable_zerocopy_send_server": true, 00:27:00.726 "impl_name": "ssl", 00:27:00.726 "recv_buf_size": 4096, 00:27:00.726 "send_buf_size": 4096, 00:27:00.726 "tls_version": 0, 00:27:00.726 "zerocopy_threshold": 0 00:27:00.726 } 00:27:00.726 } 00:27:00.726 ] 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "subsystem": "vmd", 00:27:00.726 "config": [] 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "subsystem": "accel", 00:27:00.726 "config": [ 00:27:00.726 { 00:27:00.726 "method": "accel_set_options", 00:27:00.726 "params": { 00:27:00.726 "buf_count": 2048, 00:27:00.726 "large_cache_size": 16, 00:27:00.726 "sequence_count": 2048, 00:27:00.726 "small_cache_size": 128, 00:27:00.726 "task_count": 2048 00:27:00.726 } 00:27:00.726 } 00:27:00.726 ] 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "subsystem": "bdev", 00:27:00.726 "config": [ 00:27:00.726 { 00:27:00.726 "method": "bdev_set_options", 00:27:00.726 "params": { 00:27:00.726 "bdev_auto_examine": true, 00:27:00.726 "bdev_io_cache_size": 256, 00:27:00.726 "bdev_io_pool_size": 65535, 00:27:00.726 "iobuf_large_cache_size": 16, 00:27:00.726 "iobuf_small_cache_size": 128 00:27:00.726 } 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "method": "bdev_raid_set_options", 00:27:00.726 "params": { 00:27:00.726 "process_window_size_kb": 1024 00:27:00.726 } 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "method": "bdev_iscsi_set_options", 00:27:00.726 "params": { 00:27:00.726 "timeout_sec": 30 00:27:00.726 } 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "method": "bdev_nvme_set_options", 00:27:00.726 "params": { 00:27:00.726 "action_on_timeout": "none", 00:27:00.726 "allow_accel_sequence": false, 00:27:00.726 "arbitration_burst": 0, 00:27:00.726 "bdev_retry_count": 3, 00:27:00.726 "ctrlr_loss_timeout_sec": 0, 00:27:00.726 "delay_cmd_submit": true, 00:27:00.726 "dhchap_dhgroups": [ 00:27:00.726 "null", 00:27:00.726 "ffdhe2048", 00:27:00.726 "ffdhe3072", 00:27:00.726 "ffdhe4096", 00:27:00.726 "ffdhe6144", 00:27:00.726 "ffdhe8192" 00:27:00.726 ], 00:27:00.726 "dhchap_digests": [ 00:27:00.726 "sha256", 00:27:00.726 "sha384", 00:27:00.726 "sha512" 00:27:00.726 ], 00:27:00.726 "disable_auto_failback": false, 00:27:00.726 "fast_io_fail_timeout_sec": 0, 00:27:00.726 "generate_uuids": false, 00:27:00.726 "high_priority_weight": 0, 00:27:00.726 "io_path_stat": false, 00:27:00.726 "io_queue_requests": 0, 00:27:00.726 "keep_alive_timeout_ms": 10000, 00:27:00.726 "low_priority_weight": 0, 00:27:00.726 "medium_priority_weight": 0, 00:27:00.726 "nvme_adminq_poll_period_us": 10000, 00:27:00.726 "nvme_error_stat": false, 00:27:00.726 "nvme_ioq_poll_period_us": 0, 00:27:00.726 "rdma_cm_event_timeout_ms": 0, 00:27:00.726 "rdma_max_cq_size": 0, 00:27:00.726 "rdma_srq_size": 0, 00:27:00.726 "reconnect_delay_sec": 0, 00:27:00.726 "timeout_admin_us": 0, 00:27:00.726 "timeout_us": 0, 00:27:00.726 "transport_ack_timeout": 0, 00:27:00.726 "transport_retry_count": 4, 00:27:00.726 "transport_tos": 0 00:27:00.726 } 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "method": "bdev_nvme_set_hotplug", 00:27:00.726 "params": { 00:27:00.726 "enable": false, 00:27:00.726 "period_us": 100000 00:27:00.726 } 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "method": "bdev_malloc_create", 00:27:00.726 "params": { 
00:27:00.726 "block_size": 4096, 00:27:00.726 "name": "malloc0", 00:27:00.726 "num_blocks": 8192, 00:27:00.726 "optimal_io_boundary": 0, 00:27:00.726 "physical_block_size": 4096, 00:27:00.726 "uuid": "6f851418-fc54-4bfb-addc-60e45b57f955" 00:27:00.726 } 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "method": "bdev_wait_for_examine" 00:27:00.726 } 00:27:00.726 ] 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "subsystem": "nbd", 00:27:00.726 "config": [] 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "subsystem": "scheduler", 00:27:00.726 "config": [ 00:27:00.726 { 00:27:00.726 "method": "framework_set_scheduler", 00:27:00.726 "params": { 00:27:00.726 "name": "static" 00:27:00.726 } 00:27:00.726 } 00:27:00.726 ] 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "subsystem": "nvmf", 00:27:00.726 "config": [ 00:27:00.726 { 00:27:00.726 "method": "nvmf_set_config", 00:27:00.726 "params": { 00:27:00.726 "admin_cmd_passthru": { 00:27:00.726 "identify_ctrlr": false 00:27:00.726 }, 00:27:00.726 "discovery_filter": "match_any" 00:27:00.726 } 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "method": "nvmf_set_max_subsystems", 00:27:00.726 "params": { 00:27:00.726 "max_subsystems": 1024 00:27:00.726 } 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "method": "nvmf_set_crdt", 00:27:00.726 "params": { 00:27:00.726 "crdt1": 0, 00:27:00.726 "crdt2": 0, 00:27:00.726 "crdt3": 0 00:27:00.726 } 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "method": "nvmf_create_transport", 00:27:00.726 "params": { 00:27:00.726 "abort_timeout_sec": 1, 00:27:00.726 "ack_timeout": 0, 00:27:00.726 "buf_cache_size": 4294967295, 00:27:00.726 "c2h_success": false, 00:27:00.726 "data_wr_pool_size": 0, 00:27:00.726 "dif_insert_or_strip": false, 00:27:00.726 "in_capsule_data_size": 4096, 00:27:00.726 "io_unit_size": 131072, 00:27:00.726 "max_aq_depth": 128, 00:27:00.726 "max_io_qpairs_per_ctrlr": 127, 00:27:00.726 "max_io_size": 131072, 00:27:00.726 "max_queue_depth": 128, 00:27:00.726 "num_shared_buffers": 511, 00:27:00.726 "sock_priority": 0, 00:27:00.726 "trtype": "TCP", 00:27:00.726 "zcopy": false 00:27:00.726 } 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "method": "nvmf_create_subsystem", 00:27:00.726 "params": { 00:27:00.726 "allow_any_host": false, 00:27:00.726 "ana_reporting": false, 00:27:00.726 "max_cntlid": 65519, 00:27:00.726 "max_namespaces": 32, 00:27:00.726 "min_cntlid": 1, 00:27:00.726 "model_number": "SPDK bdev Controller", 00:27:00.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:00.726 "serial_number": "00000000000000000000" 00:27:00.726 } 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "method": "nvmf_subsystem_add_host", 00:27:00.726 "params": { 00:27:00.726 "host": "nqn.2016-06.io.spdk:host1", 00:27:00.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:00.726 "psk": "key0" 00:27:00.726 } 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "method": "nvmf_subsystem_add_ns", 00:27:00.726 "params": { 00:27:00.726 "namespace": { 00:27:00.726 "bdev_name": "malloc0", 00:27:00.726 "nguid": "6F851418FC544BFBADDC60E45B57F955", 00:27:00.726 "no_auto_visible": false, 00:27:00.726 "nsid": 1, 00:27:00.726 "uuid": "6f851418-fc54-4bfb-addc-60e45b57f955" 00:27:00.726 }, 00:27:00.726 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:27:00.726 } 00:27:00.726 }, 00:27:00.726 { 00:27:00.726 "method": "nvmf_subsystem_add_listener", 00:27:00.726 "params": { 00:27:00.726 "listen_address": { 00:27:00.726 "adrfam": "IPv4", 00:27:00.726 "traddr": "10.0.0.2", 00:27:00.726 "trsvcid": "4420", 00:27:00.726 "trtype": "TCP" 00:27:00.726 }, 00:27:00.726 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:27:00.726 "secure_channel": true 00:27:00.726 } 00:27:00.726 } 00:27:00.726 ] 00:27:00.726 } 00:27:00.726 ] 00:27:00.726 }' 00:27:00.726 13:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:27:00.984 13:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:27:00.984 "subsystems": [ 00:27:00.984 { 00:27:00.984 "subsystem": "keyring", 00:27:00.984 "config": [ 00:27:00.984 { 00:27:00.984 "method": "keyring_file_add_key", 00:27:00.984 "params": { 00:27:00.984 "name": "key0", 00:27:00.984 "path": "/tmp/tmp.9pftivPFwd" 00:27:00.984 } 00:27:00.984 } 00:27:00.984 ] 00:27:00.984 }, 00:27:00.984 { 00:27:00.984 "subsystem": "iobuf", 00:27:00.984 "config": [ 00:27:00.984 { 00:27:00.984 "method": "iobuf_set_options", 00:27:00.984 "params": { 00:27:00.984 "large_bufsize": 135168, 00:27:00.984 "large_pool_count": 1024, 00:27:00.984 "small_bufsize": 8192, 00:27:00.984 "small_pool_count": 8192 00:27:00.984 } 00:27:00.984 } 00:27:00.984 ] 00:27:00.984 }, 00:27:00.984 { 00:27:00.984 "subsystem": "sock", 00:27:00.984 "config": [ 00:27:00.984 { 00:27:00.984 "method": "sock_impl_set_options", 00:27:00.984 "params": { 00:27:00.984 "enable_ktls": false, 00:27:00.984 "enable_placement_id": 0, 00:27:00.984 "enable_quickack": false, 00:27:00.984 "enable_recv_pipe": true, 00:27:00.984 "enable_zerocopy_send_client": false, 00:27:00.984 "enable_zerocopy_send_server": true, 00:27:00.984 "impl_name": "posix", 00:27:00.984 "recv_buf_size": 2097152, 00:27:00.984 "send_buf_size": 2097152, 00:27:00.984 "tls_version": 0, 00:27:00.984 "zerocopy_threshold": 0 00:27:00.984 } 00:27:00.984 }, 00:27:00.984 { 00:27:00.984 "method": "sock_impl_set_options", 00:27:00.984 "params": { 00:27:00.984 "enable_ktls": false, 00:27:00.984 "enable_placement_id": 0, 00:27:00.984 "enable_quickack": false, 00:27:00.984 "enable_recv_pipe": true, 00:27:00.984 "enable_zerocopy_send_client": false, 00:27:00.984 "enable_zerocopy_send_server": true, 00:27:00.984 "impl_name": "ssl", 00:27:00.984 "recv_buf_size": 4096, 00:27:00.984 "send_buf_size": 4096, 00:27:00.984 "tls_version": 0, 00:27:00.984 "zerocopy_threshold": 0 00:27:00.984 } 00:27:00.984 } 00:27:00.984 ] 00:27:00.984 }, 00:27:00.984 { 00:27:00.984 "subsystem": "vmd", 00:27:00.984 "config": [] 00:27:00.984 }, 00:27:00.984 { 00:27:00.984 "subsystem": "accel", 00:27:00.984 "config": [ 00:27:00.984 { 00:27:00.984 "method": "accel_set_options", 00:27:00.984 "params": { 00:27:00.984 "buf_count": 2048, 00:27:00.984 "large_cache_size": 16, 00:27:00.984 "sequence_count": 2048, 00:27:00.984 "small_cache_size": 128, 00:27:00.984 "task_count": 2048 00:27:00.984 } 00:27:00.984 } 00:27:00.984 ] 00:27:00.984 }, 00:27:00.984 { 00:27:00.984 "subsystem": "bdev", 00:27:00.984 "config": [ 00:27:00.984 { 00:27:00.984 "method": "bdev_set_options", 00:27:00.984 "params": { 00:27:00.984 "bdev_auto_examine": true, 00:27:00.984 "bdev_io_cache_size": 256, 00:27:00.984 "bdev_io_pool_size": 65535, 00:27:00.984 "iobuf_large_cache_size": 16, 00:27:00.984 "iobuf_small_cache_size": 128 00:27:00.984 } 00:27:00.984 }, 00:27:00.984 { 00:27:00.984 "method": "bdev_raid_set_options", 00:27:00.984 "params": { 00:27:00.984 "process_window_size_kb": 1024 00:27:00.984 } 00:27:00.984 }, 00:27:00.984 { 00:27:00.984 "method": "bdev_iscsi_set_options", 00:27:00.984 "params": { 00:27:00.984 "timeout_sec": 30 00:27:00.984 } 00:27:00.984 }, 00:27:00.984 { 00:27:00.984 "method": "bdev_nvme_set_options", 00:27:00.984 "params": { 00:27:00.984 
"action_on_timeout": "none", 00:27:00.984 "allow_accel_sequence": false, 00:27:00.984 "arbitration_burst": 0, 00:27:00.984 "bdev_retry_count": 3, 00:27:00.984 "ctrlr_loss_timeout_sec": 0, 00:27:00.984 "delay_cmd_submit": true, 00:27:00.984 "dhchap_dhgroups": [ 00:27:00.984 "null", 00:27:00.984 "ffdhe2048", 00:27:00.984 "ffdhe3072", 00:27:00.984 "ffdhe4096", 00:27:00.984 "ffdhe6144", 00:27:00.984 "ffdhe8192" 00:27:00.984 ], 00:27:00.984 "dhchap_digests": [ 00:27:00.984 "sha256", 00:27:00.984 "sha384", 00:27:00.984 "sha512" 00:27:00.984 ], 00:27:00.984 "disable_auto_failback": false, 00:27:00.984 "fast_io_fail_timeout_sec": 0, 00:27:00.984 "generate_uuids": false, 00:27:00.984 "high_priority_weight": 0, 00:27:00.984 "io_path_stat": false, 00:27:00.984 "io_queue_requests": 512, 00:27:00.984 "keep_alive_timeout_ms": 10000, 00:27:00.984 "low_priority_weight": 0, 00:27:00.984 "medium_priority_weight": 0, 00:27:00.984 "nvme_adminq_poll_period_us": 10000, 00:27:00.984 "nvme_error_stat": false, 00:27:00.984 "nvme_ioq_poll_period_us": 0, 00:27:00.984 "rdma_cm_event_timeout_ms": 0, 00:27:00.984 "rdma_max_cq_size": 0, 00:27:00.984 "rdma_srq_size": 0, 00:27:00.984 "reconnect_delay_sec": 0, 00:27:00.984 "timeout_admin_us": 0, 00:27:00.984 "timeout_us": 0, 00:27:00.984 "transport_ack_timeout": 0, 00:27:00.984 "transport_retry_count": 4, 00:27:00.984 "transport_tos": 0 00:27:00.984 } 00:27:00.984 }, 00:27:00.984 { 00:27:00.984 "method": "bdev_nvme_attach_controller", 00:27:00.984 "params": { 00:27:00.984 "adrfam": "IPv4", 00:27:00.984 "ctrlr_loss_timeout_sec": 0, 00:27:00.984 "ddgst": false, 00:27:00.984 "fast_io_fail_timeout_sec": 0, 00:27:00.984 "hdgst": false, 00:27:00.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:00.984 "name": "nvme0", 00:27:00.984 "prchk_guard": false, 00:27:00.984 "prchk_reftag": false, 00:27:00.984 "psk": "key0", 00:27:00.984 "reconnect_delay_sec": 0, 00:27:00.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:00.984 "traddr": "10.0.0.2", 00:27:00.984 "trsvcid": "4420", 00:27:00.984 "trtype": "TCP" 00:27:00.984 } 00:27:00.984 }, 00:27:00.984 { 00:27:00.984 "method": "bdev_nvme_set_hotplug", 00:27:00.984 "params": { 00:27:00.984 "enable": false, 00:27:00.984 "period_us": 100000 00:27:00.984 } 00:27:00.984 }, 00:27:00.984 { 00:27:00.984 "method": "bdev_enable_histogram", 00:27:00.984 "params": { 00:27:00.984 "enable": true, 00:27:00.984 "name": "nvme0n1" 00:27:00.984 } 00:27:00.984 }, 00:27:00.984 { 00:27:00.984 "method": "bdev_wait_for_examine" 00:27:00.984 } 00:27:00.984 ] 00:27:00.984 }, 00:27:00.984 { 00:27:00.984 "subsystem": "nbd", 00:27:00.984 "config": [] 00:27:00.984 } 00:27:00.984 ] 00:27:00.984 }' 00:27:00.984 13:45:13 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 101591 00:27:00.984 13:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 101591 ']' 00:27:00.984 13:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 101591 00:27:00.984 13:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:00.984 13:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:00.984 13:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101591 00:27:00.984 13:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:00.984 13:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:00.984 killing process with pid 101591 00:27:00.984 13:45:13 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 101591' 00:27:00.984 13:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 101591 00:27:00.984 Received shutdown signal, test time was about 1.000000 seconds 00:27:00.984 00:27:00.984 Latency(us) 00:27:00.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.984 =================================================================================================================== 00:27:00.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:00.984 13:45:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 101591 00:27:01.242 13:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 101541 00:27:01.242 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 101541 ']' 00:27:01.242 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 101541 00:27:01.242 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:01.242 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:01.242 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101541 00:27:01.242 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:01.242 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:01.242 killing process with pid 101541 00:27:01.242 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101541' 00:27:01.242 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 101541 00:27:01.242 [2024-05-15 13:45:14.202250] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:01.242 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 101541 00:27:01.499 13:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:27:01.499 13:45:14 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:27:01.500 "subsystems": [ 00:27:01.500 { 00:27:01.500 "subsystem": "keyring", 00:27:01.500 "config": [ 00:27:01.500 { 00:27:01.500 "method": "keyring_file_add_key", 00:27:01.500 "params": { 00:27:01.500 "name": "key0", 00:27:01.500 "path": "/tmp/tmp.9pftivPFwd" 00:27:01.500 } 00:27:01.500 } 00:27:01.500 ] 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "subsystem": "iobuf", 00:27:01.500 "config": [ 00:27:01.500 { 00:27:01.500 "method": "iobuf_set_options", 00:27:01.500 "params": { 00:27:01.500 "large_bufsize": 135168, 00:27:01.500 "large_pool_count": 1024, 00:27:01.500 "small_bufsize": 8192, 00:27:01.500 "small_pool_count": 8192 00:27:01.500 } 00:27:01.500 } 00:27:01.500 ] 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "subsystem": "sock", 00:27:01.500 "config": [ 00:27:01.500 { 00:27:01.500 "method": "sock_impl_set_options", 00:27:01.500 "params": { 00:27:01.500 "enable_ktls": false, 00:27:01.500 "enable_placement_id": 0, 00:27:01.500 "enable_quickack": false, 00:27:01.500 "enable_recv_pipe": true, 00:27:01.500 "enable_zerocopy_send_client": false, 00:27:01.500 "enable_zerocopy_send_server": true, 00:27:01.500 "impl_name": "posix", 00:27:01.500 "recv_buf_size": 2097152, 00:27:01.500 "send_buf_size": 2097152, 00:27:01.500 "tls_version": 0, 00:27:01.500 "zerocopy_threshold": 0 00:27:01.500 } 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "method": "sock_impl_set_options", 00:27:01.500 
"params": { 00:27:01.500 "enable_ktls": false, 00:27:01.500 "enable_placement_id": 0, 00:27:01.500 "enable_quickack": false, 00:27:01.500 "enable_recv_pipe": true, 00:27:01.500 "enable_zerocopy_send_client": false, 00:27:01.500 "enable_zerocopy_send_server": true, 00:27:01.500 "impl_name": "ssl", 00:27:01.500 "recv_buf_size": 4096, 00:27:01.500 "send_buf_size": 4096, 00:27:01.500 "tls_version": 0, 00:27:01.500 "zerocopy_threshold": 0 00:27:01.500 } 00:27:01.500 } 00:27:01.500 ] 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "subsystem": "vmd", 00:27:01.500 "config": [] 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "subsystem": "accel", 00:27:01.500 "config": [ 00:27:01.500 { 00:27:01.500 "method": "accel_set_options", 00:27:01.500 "params": { 00:27:01.500 "buf_count": 2048, 00:27:01.500 "large_cache_size": 16, 00:27:01.500 "sequence_count": 2048, 00:27:01.500 "small_cache_size": 128, 00:27:01.500 "task_count": 2048 00:27:01.500 } 00:27:01.500 } 00:27:01.500 ] 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "subsystem": "bdev", 00:27:01.500 "config": [ 00:27:01.500 { 00:27:01.500 "method": "bdev_set_options", 00:27:01.500 "params": { 00:27:01.500 "bdev_auto_examine": true, 00:27:01.500 "bdev_io_cache_size": 256, 00:27:01.500 "bdev_io_pool_size": 65535, 00:27:01.500 "iobuf_large_cache_size": 16, 00:27:01.500 "iobuf_small_cache_size": 128 00:27:01.500 } 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "method": "bdev_raid_set_options", 00:27:01.500 "params": { 00:27:01.500 "process_window_size_kb": 1024 00:27:01.500 } 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "method": "bdev_iscsi_set_options", 00:27:01.500 "params": { 00:27:01.500 "timeout_sec": 30 00:27:01.500 } 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "method": "bdev_nvme_set_options", 00:27:01.500 "params": { 00:27:01.500 "action_on_timeout": "none", 00:27:01.500 "allow_accel_sequence": false, 00:27:01.500 "arbitration_burst": 0, 00:27:01.500 "bdev_retry_count": 3, 00:27:01.500 "ctrlr_loss_timeout_sec": 0, 00:27:01.500 "delay_cmd_submit": true, 00:27:01.500 "dhchap_dhgroups": [ 00:27:01.500 "null", 00:27:01.500 "ffdhe2048", 00:27:01.500 "ffdhe3072", 00:27:01.500 "ffdhe4096", 00:27:01.500 "ffdhe6144", 00:27:01.500 "ffdhe8192" 00:27:01.500 ], 00:27:01.500 "dhchap_digests": [ 00:27:01.500 "sha256", 00:27:01.500 "sha384", 00:27:01.500 "sha512" 00:27:01.500 ], 00:27:01.500 "disable_auto_failback": false, 00:27:01.500 "fast_io_fail_timeout_sec": 0, 00:27:01.500 "generate_uuids": false, 00:27:01.500 "high_priority_weight": 0, 00:27:01.500 "io_path_stat": false, 00:27:01.500 "io_queue_requests": 0, 00:27:01.500 "keep_alive_timeout_ms": 10000, 00:27:01.500 "low_priority_weight": 0, 00:27:01.500 "medium_priority_weight": 0, 00:27:01.500 "nvme_adminq_poll_period_us": 10000, 00:27:01.500 "nvme_error_stat": false, 00:27:01.500 "nvme_ioq_poll_period_us": 0, 00:27:01.500 "rdma_cm_event_timeout_ms": 0, 00:27:01.500 "rdma_max_cq_size": 0, 00:27:01.500 "rdma_srq_size": 0, 00:27:01.500 "reconnect_delay_sec": 0, 00:27:01.500 "timeout_admin_us": 0, 00:27:01.500 "timeout_us": 0, 00:27:01.500 "transport_ack_timeout": 0, 00:27:01.500 "transport_retry_count": 4, 00:27:01.500 "transport_tos": 0 00:27:01.500 } 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "method": "bdev_nvme_set_hotplug", 00:27:01.500 "params": { 00:27:01.500 "enable": false, 00:27:01.500 "period_us": 100000 00:27:01.500 } 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "method": "bdev_malloc_create", 00:27:01.500 "params": { 00:27:01.500 "block_size": 4096, 00:27:01.500 "name": "malloc0", 00:27:01.500 
"num_blocks": 8192, 00:27:01.500 "optimal_io_boundary": 0, 00:27:01.500 "physical_block_size": 4096, 00:27:01.500 "uuid": "6f851418-fc54-4bfb-addc-60e45b57f955" 00:27:01.500 } 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "method": "bdev_wait_for_examine" 00:27:01.500 } 00:27:01.500 ] 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "subsystem": "nbd", 00:27:01.500 "config": [] 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "subsystem": "scheduler", 00:27:01.500 "config": [ 00:27:01.500 { 00:27:01.500 "method": "framework_set_scheduler", 00:27:01.500 "params": { 00:27:01.500 "name": "static" 00:27:01.500 } 00:27:01.500 } 00:27:01.500 ] 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "subsystem": "nvmf", 00:27:01.500 "config": [ 00:27:01.500 { 00:27:01.500 "method": "nvmf_set_config", 00:27:01.500 "params": { 00:27:01.500 "admin_cmd_passthru": { 00:27:01.500 "identify_ctrlr": false 00:27:01.500 }, 00:27:01.500 "discovery_filter": "match_any" 00:27:01.500 } 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "method": "nvmf_set_max_subsystems", 00:27:01.500 "params": { 00:27:01.500 "max_subsystems": 1024 00:27:01.500 } 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "method": "nvmf_set_crdt", 00:27:01.500 "params": { 00:27:01.500 "crdt1": 0, 00:27:01.500 "crdt2": 0, 00:27:01.500 "crdt3": 0 00:27:01.500 } 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "method": "nvmf_create_transport", 00:27:01.500 "params": { 00:27:01.500 "abort_timeout_sec": 1, 00:27:01.500 "ack_timeout": 0, 00:27:01.500 "buf_cache_size": 4294967295, 00:27:01.500 "c2h_success": false, 00:27:01.500 "data_wr_pool_size": 0, 00:27:01.500 "dif_insert_or_strip": false, 00:27:01.500 "in_capsule_data_size": 4096, 00:27:01.500 "io_unit_size": 131072, 00:27:01.500 "max_aq_depth": 128, 00:27:01.500 "max_io_qpairs_per_ctrlr": 127, 00:27:01.500 "max_io_size": 131072, 00:27:01.500 "max_queue_depth": 128, 00:27:01.500 "num_shared_buffers": 511, 00:27:01.500 "sock_priority": 0, 00:27:01.500 "trtype": "TCP", 00:27:01.500 "zcopy": false 00:27:01.500 } 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "method": "nvmf_create_subsystem", 00:27:01.500 "params": { 00:27:01.500 "allow_any_host": false, 00:27:01.500 "ana_reporting": false, 00:27:01.500 "max_cntlid": 65519, 00:27:01.500 "max_namespaces": 32, 00:27:01.500 "min_cntlid": 1, 00:27:01.500 "model_number": "SPDK bdev Controller", 00:27:01.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:01.500 "serial_number": "00000000000000000000" 00:27:01.500 } 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "method": "nvmf_subsystem_add_host", 00:27:01.500 "params": { 00:27:01.500 "host": "nqn.2016-06.io.spdk:host1", 00:27:01.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:01.500 "psk": "key0" 00:27:01.500 } 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "method": "nvmf_subsystem_add_ns", 00:27:01.500 "params": { 00:27:01.500 "namespace": { 00:27:01.500 "bdev_name": "malloc0", 00:27:01.500 "nguid": "6F851418FC544BFBADDC60E45B57F955", 00:27:01.500 "no_auto_visible": false, 00:27:01.500 "nsid": 1, 00:27:01.500 "uuid": "6f851418-fc54-4bfb-addc-60e45b57f955" 00:27:01.500 }, 00:27:01.500 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:27:01.500 } 00:27:01.500 }, 00:27:01.500 { 00:27:01.500 "method": "nvmf_subsystem_add_listener", 00:27:01.500 "params": { 00:27:01.500 "listen_address": { 00:27:01.500 "adrfam": "IPv4", 00:27:01.500 "traddr": "10.0.0.2", 00:27:01.500 "trsvcid": "4420", 00:27:01.500 "trtype": "TCP" 00:27:01.500 }, 00:27:01.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:01.500 "secure_channel": true 00:27:01.500 } 00:27:01.500 } 00:27:01.500 ] 
00:27:01.500 } 00:27:01.500 ] 00:27:01.500 }' 00:27:01.500 13:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:01.500 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:01.500 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:01.500 13:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:27:01.500 13:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=101682 00:27:01.500 13:45:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 101682 00:27:01.500 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 101682 ']' 00:27:01.500 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.500 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:01.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.500 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.500 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:01.500 13:45:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:01.500 [2024-05-15 13:45:14.494852] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:01.500 [2024-05-15 13:45:14.494958] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.757 [2024-05-15 13:45:14.613499] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:01.757 [2024-05-15 13:45:14.625399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.757 [2024-05-15 13:45:14.723594] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.757 [2024-05-15 13:45:14.723678] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.757 [2024-05-15 13:45:14.723690] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.757 [2024-05-15 13:45:14.723699] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.757 [2024-05-15 13:45:14.723707] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
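The two JSON blobs captured above come straight from save_config: tgtcfg from the target's RPC socket and bperfcfg from /var/tmp/bdevperf.sock. The target is then restarted with the captured configuration passed on /dev/fd/62, and the same trick is used for bdevperf below with /dev/fd/63. A standalone equivalent, assuming a plain file (tgt_config.json is a name chosen here only for illustration) instead of a process-substitution fd, would be:

  # Capture the live configuration, then restart the target from it.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > tgt_config.json
  /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c tgt_config.json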
00:27:01.757 [2024-05-15 13:45:14.723800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.015 [2024-05-15 13:45:14.951492] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.015 [2024-05-15 13:45:14.983385] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:02.015 [2024-05-15 13:45:14.983494] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:02.015 [2024-05-15 13:45:14.983717] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=101726 00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 101726 /var/tmp/bdevperf.sock 00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 101726 ']' 00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:02.581 13:45:15 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:27:02.581 "subsystems": [ 00:27:02.581 { 00:27:02.581 "subsystem": "keyring", 00:27:02.581 "config": [ 00:27:02.581 { 00:27:02.581 "method": "keyring_file_add_key", 00:27:02.581 "params": { 00:27:02.581 "name": "key0", 00:27:02.581 "path": "/tmp/tmp.9pftivPFwd" 00:27:02.581 } 00:27:02.581 } 00:27:02.581 ] 00:27:02.581 }, 00:27:02.581 { 00:27:02.581 "subsystem": "iobuf", 00:27:02.581 "config": [ 00:27:02.581 { 00:27:02.581 "method": "iobuf_set_options", 00:27:02.581 "params": { 00:27:02.581 "large_bufsize": 135168, 00:27:02.581 "large_pool_count": 1024, 00:27:02.581 "small_bufsize": 8192, 00:27:02.581 "small_pool_count": 8192 00:27:02.581 } 00:27:02.581 } 00:27:02.581 ] 00:27:02.581 }, 00:27:02.581 { 00:27:02.581 "subsystem": "sock", 00:27:02.581 "config": [ 00:27:02.581 { 00:27:02.581 "method": "sock_impl_set_options", 00:27:02.581 "params": { 00:27:02.581 "enable_ktls": false, 00:27:02.581 "enable_placement_id": 0, 00:27:02.581 "enable_quickack": false, 00:27:02.581 "enable_recv_pipe": true, 00:27:02.581 "enable_zerocopy_send_client": false, 00:27:02.581 "enable_zerocopy_send_server": true, 00:27:02.581 "impl_name": "posix", 00:27:02.581 "recv_buf_size": 2097152, 00:27:02.581 "send_buf_size": 2097152, 00:27:02.581 "tls_version": 0, 00:27:02.581 "zerocopy_threshold": 0 00:27:02.581 } 00:27:02.581 }, 00:27:02.581 { 00:27:02.581 "method": "sock_impl_set_options", 00:27:02.581 "params": { 00:27:02.581 "enable_ktls": false, 00:27:02.581 "enable_placement_id": 0, 00:27:02.581 "enable_quickack": false, 00:27:02.581 "enable_recv_pipe": true, 00:27:02.581 "enable_zerocopy_send_client": false, 00:27:02.581 "enable_zerocopy_send_server": true, 00:27:02.581 "impl_name": "ssl", 00:27:02.581 "recv_buf_size": 4096, 00:27:02.581 "send_buf_size": 4096, 00:27:02.581 "tls_version": 0, 00:27:02.581 "zerocopy_threshold": 0 00:27:02.581 } 00:27:02.581 } 00:27:02.581 ] 00:27:02.581 }, 00:27:02.581 { 00:27:02.581 "subsystem": "vmd", 00:27:02.581 "config": [] 00:27:02.581 }, 00:27:02.581 { 00:27:02.581 "subsystem": "accel", 00:27:02.581 "config": [ 00:27:02.581 { 00:27:02.581 "method": "accel_set_options", 00:27:02.581 "params": { 00:27:02.581 "buf_count": 2048, 00:27:02.581 "large_cache_size": 16, 00:27:02.581 "sequence_count": 2048, 00:27:02.581 "small_cache_size": 128, 00:27:02.581 "task_count": 2048 00:27:02.581 } 00:27:02.581 } 00:27:02.581 ] 00:27:02.581 }, 00:27:02.581 { 00:27:02.581 "subsystem": "bdev", 00:27:02.581 "config": [ 00:27:02.581 { 00:27:02.581 "method": "bdev_set_options", 00:27:02.581 "params": { 00:27:02.581 "bdev_auto_examine": true, 00:27:02.581 "bdev_io_cache_size": 256, 00:27:02.581 "bdev_io_pool_size": 65535, 00:27:02.581 "iobuf_large_cache_size": 16, 00:27:02.581 "iobuf_small_cache_size": 128 00:27:02.581 } 00:27:02.581 }, 00:27:02.581 { 00:27:02.581 "method": "bdev_raid_set_options", 00:27:02.581 "params": { 00:27:02.581 "process_window_size_kb": 1024 00:27:02.581 } 00:27:02.581 }, 00:27:02.581 { 00:27:02.581 "method": "bdev_iscsi_set_options", 00:27:02.581 "params": { 00:27:02.581 "timeout_sec": 30 00:27:02.581 } 00:27:02.581 }, 00:27:02.581 { 00:27:02.581 "method": "bdev_nvme_set_options", 00:27:02.581 "params": { 00:27:02.581 "action_on_timeout": "none", 00:27:02.581 "allow_accel_sequence": false, 00:27:02.581 "arbitration_burst": 0, 00:27:02.581 "bdev_retry_count": 3, 00:27:02.581 "ctrlr_loss_timeout_sec": 0, 
00:27:02.581 "delay_cmd_submit": true, 00:27:02.581 "dhchap_dhgroups": [ 00:27:02.581 "null", 00:27:02.581 "ffdhe2048", 00:27:02.581 "ffdhe3072", 00:27:02.581 "ffdhe4096", 00:27:02.581 "ffdhe6144", 00:27:02.581 "ffdhe8192" 00:27:02.581 ], 00:27:02.581 "dhchap_digests": [ 00:27:02.581 "sha256", 00:27:02.581 "sha384", 00:27:02.581 "sha512" 00:27:02.581 ], 00:27:02.581 "disable_auto_failback": false, 00:27:02.581 "fast_io_fail_timeout_sec": 0, 00:27:02.581 "generate_uuids": false, 00:27:02.581 "high_priority_weight": 0, 00:27:02.581 "io_path_stat": false, 00:27:02.581 "io_queue_requests": 512, 00:27:02.581 "keep_alive_timeout_ms": 10000, 00:27:02.581 "low_priority_weight": 0, 00:27:02.581 "medium_priority_weight": 0, 00:27:02.581 "nvme_adminq_poll_period_us": 10000, 00:27:02.581 "nvme_error_stat": false, 00:27:02.581 "nvme_ioq_poll_period_us": 0, 00:27:02.581 "rdma_cm_event_timeout_ms": 0, 00:27:02.581 "rdma_max_cq_size": 0, 00:27:02.581 "rdma_srq_size": 0, 00:27:02.581 "reconnect_delay_sec": 0, 00:27:02.581 "timeout_admin_us": 0, 00:27:02.581 "timeout_us": 0, 00:27:02.581 "transport_ack_timeout": 0, 00:27:02.581 "transport_retry_count": 4, 00:27:02.581 "transport_tos": 0 00:27:02.581 } 00:27:02.581 }, 00:27:02.581 { 00:27:02.581 "method": "bdev_nvme_attach_controller", 00:27:02.581 "params": { 00:27:02.581 "adrfam": "IPv4", 00:27:02.581 "ctrlr_loss_timeout_sec": 0, 00:27:02.581 "ddgst": false, 00:27:02.581 "fast_io_fail_timeout_sec": 0, 00:27:02.581 "hdgst": false, 00:27:02.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:02.581 "name": "nvme0", 00:27:02.581 "prchk_guard": false, 00:27:02.582 "prchk_reftag": false, 00:27:02.582 "psk": "key0", 00:27:02.582 "reconnect_delay_sec": 0, 00:27:02.582 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:02.582 "traddr": "10.0.0.2", 00:27:02.582 "trsvcid": "4420", 00:27:02.582 "trtype": "TCP" 00:27:02.582 } 00:27:02.582 }, 00:27:02.582 { 00:27:02.582 "method": "bdev_nvme_set_hotplug", 00:27:02.582 "params": { 00:27:02.582 "enable": false, 00:27:02.582 "period_us": 100000 00:27:02.582 } 00:27:02.582 }, 00:27:02.582 { 00:27:02.582 "method": "bdev_enable_histogram", 00:27:02.582 "params": { 00:27:02.582 "enable": true, 00:27:02.582 "name": "nvme0n1" 00:27:02.582 } 00:27:02.582 }, 00:27:02.582 { 00:27:02.582 "method": "bdev_wait_for_examine" 00:27:02.582 } 00:27:02.582 ] 00:27:02.582 }, 00:27:02.582 { 00:27:02.582 "subsystem": "nbd", 00:27:02.582 "config": [] 00:27:02.582 } 00:27:02.582 ] 00:27:02.582 }' 00:27:02.582 13:45:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:02.582 [2024-05-15 13:45:15.570517] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:02.582 [2024-05-15 13:45:15.570628] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101726 ] 00:27:02.840 [2024-05-15 13:45:15.691528] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:02.840 [2024-05-15 13:45:15.705845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.840 [2024-05-15 13:45:15.813924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.110 [2024-05-15 13:45:15.983491] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:03.677 13:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:03.677 13:45:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:27:03.677 13:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:03.677 13:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:27:03.934 13:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.934 13:45:16 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:03.934 Running I/O for 1 seconds... 00:27:04.868 00:27:04.868 Latency(us) 00:27:04.868 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:04.868 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:04.868 Verification LBA range: start 0x0 length 0x2000 00:27:04.868 nvme0n1 : 1.03 3836.51 14.99 0.00 0.00 32944.19 7417.48 20494.89 00:27:04.868 =================================================================================================================== 00:27:04.868 Total : 3836.51 14.99 0.00 0.00 32944.19 7417.48 20494.89 00:27:04.868 0 00:27:04.868 13:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:27:04.868 13:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:27:04.868 13:45:17 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:27:04.868 13:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:27:04.868 13:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:27:04.868 13:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:27:04.868 13:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:04.868 13:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:27:04.868 13:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:27:04.868 13:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:27:04.868 13:45:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:04.868 nvmf_trace.0 00:27:05.127 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:27:05.127 13:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 101726 00:27:05.127 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 101726 ']' 00:27:05.127 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 101726 00:27:05.127 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:05.127 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:05.127 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101726 00:27:05.127 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:05.127 
killing process with pid 101726 00:27:05.127 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:05.127 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101726' 00:27:05.127 Received shutdown signal, test time was about 1.000000 seconds 00:27:05.127 00:27:05.127 Latency(us) 00:27:05.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:05.127 =================================================================================================================== 00:27:05.127 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:05.127 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 101726 00:27:05.127 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 101726 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:05.396 rmmod nvme_tcp 00:27:05.396 rmmod nvme_fabrics 00:27:05.396 rmmod nvme_keyring 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 101682 ']' 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 101682 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 101682 ']' 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 101682 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101682 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:05.396 killing process with pid 101682 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101682' 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 101682 00:27:05.396 [2024-05-15 13:45:18.367336] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:05.396 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 101682 00:27:05.656 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:05.656 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:05.656 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:05.656 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:05.656 13:45:18 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:05.656 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.656 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.656 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.656 13:45:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:05.656 13:45:18 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.2gb0laY3JA /tmp/tmp.ctqFXkYsxW /tmp/tmp.9pftivPFwd 00:27:05.656 ************************************ 00:27:05.656 END TEST nvmf_tls 00:27:05.656 ************************************ 00:27:05.656 00:27:05.656 real 1m28.767s 00:27:05.656 user 2m19.692s 00:27:05.656 sys 0m29.989s 00:27:05.656 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:05.656 13:45:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:27:05.656 13:45:18 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:27:05.656 13:45:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:05.656 13:45:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:05.656 13:45:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:05.656 ************************************ 00:27:05.656 START TEST nvmf_fips 00:27:05.656 ************************************ 00:27:05.656 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:27:05.656 * Looking for test storage... 00:27:05.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:27:05.656 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' 
-n '' ']' 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:05.917 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:27:05.918 Error setting digest 00:27:05.918 00E2C4CF347F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:27:05.918 00E2C4CF347F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- 
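The failing 'openssl md5' above is the expected result: with OPENSSL_CONF pointing at spdk_fips.conf and the Red Hat FIPS provider loaded, legacy digests must be rejected, and the test treats the 'Error setting digest' output as evidence that enforcement is active. A standalone probe along the same lines, assuming the same spdk_fips.conf produced by build_openssl_config and an OpenSSL 3.x binary:

# FIPS enforcement probe: MD5 must fail when the FIPS provider is the active default
openssl version | awk '{print $2}'                  # expect 3.0.0 or newer, as checked above
if echo probe | OPENSSL_CONF=spdk_fips.conf openssl md5 >/dev/null 2>&1; then
    echo "MD5 digest succeeded - FIPS enforcement does not appear to be active" >&2
    exit 1
else
    echo "MD5 digest rejected - consistent with FIPS mode"
fi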
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:05.918 Cannot find device "nvmf_tgt_br" 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:05.918 Cannot find device "nvmf_tgt_br2" 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:05.918 13:45:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:05.918 Cannot find device "nvmf_tgt_br" 00:27:05.918 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:27:05.918 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:06.187 Cannot find device "nvmf_tgt_br2" 00:27:06.187 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:27:06.187 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:06.187 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:06.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:06.188 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:06.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:27:06.188 00:27:06.188 --- 10.0.0.2 ping statistics --- 00:27:06.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.188 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:06.188 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
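nvmf_veth_init has now built the whole test topology from scratch: a target network namespace, veth pairs whose host-side ends are enslaved to a bridge, 10.0.0.x/24 addresses on both sides and iptables rules accepting the NVMe/TCP port, followed by reachability pings. Condensed to its essential commands (interface names and addresses exactly as used in this run; the second target pair, nvmf_tgt_if2/nvmf_tgt_br2, is omitted for brevity):

# Condensed version of the veth/namespace topology set up above
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # host-side initiator reaching the target address inside the namespace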
00:27:06.188 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:27:06.188 00:27:06.188 --- 10.0.0.3 ping statistics --- 00:27:06.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.188 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:06.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:06.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:27:06.188 00:27:06.188 --- 10.0.0.1 ping statistics --- 00:27:06.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.188 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:06.188 13:45:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:06.446 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=102007 00:27:06.446 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:06.446 13:45:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 102007 00:27:06.446 13:45:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 102007 ']' 00:27:06.446 13:45:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.446 13:45:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:06.446 13:45:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.446 13:45:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:06.446 13:45:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:06.446 [2024-05-15 13:45:19.357215] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:06.446 [2024-05-15 13:45:19.357348] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.446 [2024-05-15 13:45:19.477778] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:27:06.446 [2024-05-15 13:45:19.494542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.703 [2024-05-15 13:45:19.597309] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.703 [2024-05-15 13:45:19.597374] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.703 [2024-05-15 13:45:19.597388] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.703 [2024-05-15 13:45:19.597398] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.703 [2024-05-15 13:45:19.597407] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:06.703 [2024-05-15 13:45:19.597443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.636 13:45:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:07.636 13:45:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:27:07.636 13:45:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:07.636 13:45:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:07.636 13:45:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:07.636 13:45:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.636 13:45:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:27:07.636 13:45:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:27:07.636 13:45:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:07.636 13:45:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:27:07.636 13:45:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:07.636 13:45:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:07.636 13:45:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:07.636 13:45:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:07.636 [2024-05-15 13:45:20.680153] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.636 [2024-05-15 13:45:20.696076] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:07.636 [2024-05-15 13:45:20.696157] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:07.636 [2024-05-15 13:45:20.696364] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.636 [2024-05-15 13:45:20.727183] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:07.636 malloc0 00:27:07.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
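fips.sh then writes the TLS PSK interchange key to key.txt with 0600 permissions and hands it to setup_nvmf_tgt_conf; the target-side notices above show the TCP transport, a TLS-capable listener on 10.0.0.2:4420 and a malloc0 bdev being created, plus the deprecated PSK-path warning from the host registration. The exact RPC sequence inside setup_nvmf_tgt_conf is not visible in this excerpt; a plausible minimal equivalent consistent with those notices (the malloc size/block size, the transport options, the subsystem serial and the --psk form of nvmf_subsystem_add_host are assumptions):

# Hedged sketch of a PSK-protected NVMe/TCP target matching the notices above
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
KEY=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"   # key value from this run
chmod 0600 "$KEY"
"$RPC" nvmf_create_transport -t tcp -o -u 8192                                       # options assumed
"$RPC" bdev_malloc_create -b malloc0 32 4096                                         # size/block size assumed
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001     # serial assumed
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"   # assumed flag; matches the PSK-path deprecation warning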
00:27:07.894 13:45:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:07.894 13:45:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=102065 00:27:07.894 13:45:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 102065 /var/tmp/bdevperf.sock 00:27:07.894 13:45:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 102065 ']' 00:27:07.894 13:45:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:27:07.894 13:45:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:07.894 13:45:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:07.894 13:45:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:07.894 13:45:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:07.894 13:45:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:07.894 [2024-05-15 13:45:20.829279] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:07.894 [2024-05-15 13:45:20.829399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102065 ] 00:27:07.894 [2024-05-15 13:45:20.949039] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:07.895 [2024-05-15 13:45:20.968022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.151 [2024-05-15 13:45:21.063429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:08.718 13:45:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:08.718 13:45:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:27:08.718 13:45:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:09.283 [2024-05-15 13:45:22.083394] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:09.283 [2024-05-15 13:45:22.083520] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:09.283 TLSTESTn1 00:27:09.283 13:45:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:09.283 Running I/O for 10 seconds... 
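On the initiator side the test starts bdevperf in wait mode on its own RPC socket, attaches a TLS controller to the target with the same PSK file (the spdk_nvme_ctrlr_opts.psk deprecation warning comes from that attach), and then lets perform_tests run the 10-second verify workload against the resulting TLSTESTn1 bdev. The essential two-step pattern, with every path and argument taken from the command lines above (the harness waits for the RPC socket to appear before issuing the attach):

# Initiator side: bdevperf in wait mode + TLS attach + timed verify run
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests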
00:27:19.253 00:27:19.253 Latency(us) 00:27:19.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.253 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:19.253 Verification LBA range: start 0x0 length 0x2000 00:27:19.253 TLSTESTn1 : 10.02 3897.23 15.22 0.00 0.00 32779.76 7208.96 25737.77 00:27:19.253 =================================================================================================================== 00:27:19.253 Total : 3897.23 15.22 0.00 0.00 32779.76 7208.96 25737.77 00:27:19.253 0 00:27:19.253 13:45:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:27:19.253 13:45:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:27:19.253 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:27:19.254 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:27:19.254 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:27:19.254 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:19.254 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:27:19.254 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:27:19.254 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:27:19.254 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:19.254 nvmf_trace.0 00:27:19.513 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:27:19.513 13:45:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 102065 00:27:19.513 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 102065 ']' 00:27:19.513 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 102065 00:27:19.513 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:27:19.513 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:19.513 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102065 00:27:19.513 killing process with pid 102065 00:27:19.513 Received shutdown signal, test time was about 10.000000 seconds 00:27:19.513 00:27:19.513 Latency(us) 00:27:19.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:19.513 =================================================================================================================== 00:27:19.513 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:19.513 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:27:19.513 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:27:19.513 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102065' 00:27:19.513 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 102065 00:27:19.513 [2024-05-15 13:45:32.463809] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:19.513 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 102065 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:19.771 rmmod nvme_tcp 00:27:19.771 rmmod nvme_fabrics 00:27:19.771 rmmod nvme_keyring 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 102007 ']' 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 102007 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 102007 ']' 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 102007 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102007 00:27:19.771 killing process with pid 102007 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102007' 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 102007 00:27:19.771 [2024-05-15 13:45:32.802811] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:19.771 [2024-05-15 13:45:32.802856] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:19.771 13:45:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 102007 00:27:20.028 13:45:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:20.028 13:45:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:20.028 13:45:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:20.028 13:45:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:20.028 13:45:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:20.028 13:45:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.028 13:45:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.028 13:45:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.028 13:45:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:20.028 13:45:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:27:20.028 00:27:20.028 real 0m14.377s 00:27:20.028 user 0m19.892s 00:27:20.028 sys 0m5.627s 00:27:20.028 13:45:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:27:20.028 13:45:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:27:20.028 ************************************ 00:27:20.028 END TEST nvmf_fips 00:27:20.028 ************************************ 00:27:20.028 13:45:33 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:27:20.028 13:45:33 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:27:20.028 13:45:33 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:20.028 13:45:33 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:20.028 13:45:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:20.028 ************************************ 00:27:20.028 START TEST nvmf_fuzz 00:27:20.029 ************************************ 00:27:20.029 13:45:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:27:20.287 * Looking for test storage... 00:27:20.287 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.287 13:45:33 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.287 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:20.288 Cannot find device "nvmf_tgt_br" 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:20.288 Cannot find device "nvmf_tgt_br2" 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:20.288 Cannot find device "nvmf_tgt_br" 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:20.288 Cannot find device "nvmf_tgt_br2" 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:20.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:20.288 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:20.288 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:20.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:20.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:27:20.545 00:27:20.545 --- 10.0.0.2 ping statistics --- 00:27:20.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.545 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:20.545 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:20.545 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:27:20.545 00:27:20.545 --- 10.0.0.3 ping statistics --- 00:27:20.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.545 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:20.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:20.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:27:20.545 00:27:20.545 --- 10.0.0.1 ping statistics --- 00:27:20.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.545 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=102400 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 102400 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 102400 ']' 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
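For the fuzz test the target is started inside the same network namespace with a single core and the full tracepoint mask, and the harness then blocks until the application is listening on its RPC socket (the waitforlisten helper; its polling loop is not shown in this excerpt). The launch command as used above, with a simplified stand-in for the wait:

# Start the namespaced nvmf target for the fuzz run and wait for its RPC socket
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # simplified stand-in for waitforlisten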
00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:20.545 13:45:33 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:21.917 Malloc0 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:27:21.917 13:45:34 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:27:22.175 Shutting down the fuzz application 00:27:22.175 13:45:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:27:22.432 Shutting down the fuzz application 00:27:22.432 13:45:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:22.432 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.432 13:45:35 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:27:22.432 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.432 13:45:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:22.432 13:45:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:27:22.432 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:22.432 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:27:22.432 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:22.432 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:27:22.432 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:22.432 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:22.432 rmmod nvme_tcp 00:27:22.690 rmmod nvme_fabrics 00:27:22.690 rmmod nvme_keyring 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 102400 ']' 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 102400 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 102400 ']' 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 102400 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102400 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:22.690 killing process with pid 102400 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102400' 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 102400 00:27:22.690 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 102400 00:27:22.948 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:22.948 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:22.948 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:22.948 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:22.948 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:22.948 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.949 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.949 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.949 13:45:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:22.949 13:45:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:27:22.949 00:27:22.949 real 0m2.790s 00:27:22.949 user 0m2.936s 00:27:22.949 sys 0m0.683s 00:27:22.949 13:45:35 
nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:22.949 ************************************ 00:27:22.949 END TEST nvmf_fuzz 00:27:22.949 ************************************ 00:27:22.949 13:45:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:27:22.949 13:45:35 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:22.949 13:45:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:22.949 13:45:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:22.949 13:45:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:22.949 ************************************ 00:27:22.949 START TEST nvmf_multiconnection 00:27:22.949 ************************************ 00:27:22.949 13:45:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:27:22.949 * Looking for test storage... 00:27:22.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.949 13:45:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:23.207 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:23.207 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:23.208 Cannot find device "nvmf_tgt_br" 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:23.208 Cannot find device "nvmf_tgt_br2" 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:23.208 Cannot find device "nvmf_tgt_br" 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:23.208 Cannot find device "nvmf_tgt_br2" 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:23.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:23.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:23.208 13:45:36 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:23.208 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:23.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:27:23.467 00:27:23.467 --- 10.0.0.2 ping statistics --- 00:27:23.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.467 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:23.467 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:23.467 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:27:23.467 00:27:23.467 --- 10.0.0.3 ping statistics --- 00:27:23.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.467 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:23.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:23.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:27:23.467 00:27:23.467 --- 10.0.0.1 ping statistics --- 00:27:23.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.467 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=102611 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 102611 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 102611 ']' 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:23.467 13:45:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:23.467 [2024-05-15 13:45:36.438642] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:27:23.467 [2024-05-15 13:45:36.438751] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.726 [2024-05-15 13:45:36.568107] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
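The target for the multiconnection test was launched just above with ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF, and waitforlisten 102611 blocks until it is ready. Because only the network stack is namespaced, the RPC UNIX socket is still reachable at /var/tmp/spdk.sock from the host side. A minimal stand-in for that wait, assuming scripts/rpc.py and the default socket path (the real helper in autotest_common.sh does more bookkeeping than this):

    # Start the target in the namespace (same command line as traced) and poll
    # the RPC socket until it answers; rpc_get_methods serves only as a liveness probe.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done

The DPDK/EAL and reactor notices interleaved in the trace are the target's own startup output echoed through the same log.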
00:27:23.726 [2024-05-15 13:45:36.583055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:23.726 [2024-05-15 13:45:36.687305] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:23.726 [2024-05-15 13:45:36.687365] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.726 [2024-05-15 13:45:36.687380] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.726 [2024-05-15 13:45:36.687391] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.726 [2024-05-15 13:45:36.687400] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:23.726 [2024-05-15 13:45:36.687531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.726 [2024-05-15 13:45:36.688028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:23.726 [2024-05-15 13:45:36.688092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:23.726 [2024-05-15 13:45:36.688101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.657 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:24.657 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:27:24.657 13:45:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:24.657 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:24.657 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.657 13:45:37 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.657 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:24.657 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.657 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.657 [2024-05-15 13:45:37.491797] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.657 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.657 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 Malloc1 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 [2024-05-15 13:45:37.563206] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:24.658 [2024-05-15 13:45:37.563502] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 Malloc2 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:27:24.658 
13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 Malloc3 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 Malloc4 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 Malloc5 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.658 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 Malloc6 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 13:45:37 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 Malloc7 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 Malloc8 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 Malloc9 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.916 Malloc10 00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
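Malloc10 has just been created; the rest of this stretch repeats the same four RPCs that already set up cnode1 through cnode9, and cnode10/cnode11 follow below: create a 64 MB malloc bdev with 512-byte blocks, create the subsystem with serial SPDKi and any-host access, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. Condensed into one loop, with scripts/rpc.py on the default socket standing in for the test's rpc_cmd wrapper (an assumption; the arguments themselves are the ones traced):

    # One-time transport setup, then the per-subsystem sequence for i = 1..11.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192        # same transport options as traced
    for i in $(seq 1 11); do
        $rpc bdev_malloc_create 64 512 -b "Malloc$i"                                # 64 MB bdev, 512 B blocks
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"     # allow any host, serial SPDKi
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done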
00:27:24.916 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:27:24.917 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.917 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.917 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.917 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:27:24.917 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.917 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.917 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.917 13:45:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:27:24.917 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.917 13:45:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:24.917 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.917 13:45:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:24.917 13:45:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:27:24.917 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.917 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:25.175 Malloc11 00:27:25.175 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.175 13:45:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:27:25.175 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.175 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:25.175 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.175 13:45:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:27:25.175 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.175 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:25.175 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.175 13:45:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:27:25.175 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.175 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:25.175 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.175 13:45:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:27:25.176 13:45:38 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:25.176 13:45:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:25.176 13:45:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:27:25.176 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:27:25.176 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:25.176 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:25.176 13:45:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:27:27.730 13:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:27.730 13:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:27.730 13:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:27:27.730 13:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:27.730 13:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:27.730 13:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:27:27.730 13:45:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:27.730 13:45:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:27:27.730 13:45:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:27:27.730 13:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:27:27.730 13:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:27.730 13:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:27.730 13:45:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:27:29.647 13:45:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:29.647 13:45:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:29.647 13:45:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:27:29.647 13:45:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:29.647 13:45:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:29.647 13:45:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:27:29.647 13:45:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:29.647 13:45:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n 
nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:27:29.647 13:45:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:27:29.647 13:45:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:27:29.647 13:45:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:29.647 13:45:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:29.647 13:45:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:27:31.542 13:45:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:31.542 13:45:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:31.542 13:45:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:27:31.542 13:45:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:31.542 13:45:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:31.542 13:45:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:27:31.542 13:45:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:31.542 13:45:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:27:31.799 13:45:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:27:31.799 13:45:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:27:31.799 13:45:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:31.799 13:45:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:31.799 13:45:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:27:34.322 13:45:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:34.322 13:45:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:34.322 13:45:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:27:34.322 13:45:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:34.322 13:45:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:34.322 13:45:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:27:34.322 13:45:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:34.322 13:45:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:27:34.322 13:45:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:27:34.322 13:45:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:27:34.322 13:45:46 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:34.322 13:45:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:34.322 13:45:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:27:36.219 13:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:36.219 13:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:36.219 13:45:48 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:27:36.219 13:45:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:36.219 13:45:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:36.219 13:45:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:27:36.219 13:45:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:36.219 13:45:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:27:36.219 13:45:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:27:36.219 13:45:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:27:36.219 13:45:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:36.219 13:45:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:36.219 13:45:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:27:38.118 13:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:38.118 13:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:38.118 13:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:27:38.118 13:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:38.118 13:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:38.118 13:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:27:38.118 13:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:38.118 13:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:27:38.376 13:45:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:27:38.376 13:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:27:38.376 13:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:38.376 13:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:38.376 13:45:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:27:40.308 13:45:53 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:40.308 13:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:40.308 13:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:27:40.308 13:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:40.308 13:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:40.308 13:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:27:40.308 13:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:40.309 13:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:40.564 13:45:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:40.564 13:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:27:40.564 13:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:40.564 13:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:40.564 13:45:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:27:43.087 13:45:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:43.087 13:45:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:43.087 13:45:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:27:43.087 13:45:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:43.087 13:45:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:43.087 13:45:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:27:43.087 13:45:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:43.087 13:45:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:43.087 13:45:55 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:43.087 13:45:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:27:43.087 13:45:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:43.087 13:45:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:43.087 13:45:55 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:27:44.982 13:45:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:44.982 13:45:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:44.982 13:45:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:27:44.982 
13:45:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:44.982 13:45:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:44.982 13:45:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:27:44.982 13:45:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:44.982 13:45:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:44.982 13:45:57 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:44.982 13:45:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:27:44.982 13:45:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:44.982 13:45:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:44.982 13:45:57 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:27:46.879 13:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:46.879 13:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:46.879 13:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:27:47.136 13:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:47.136 13:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:47.136 13:45:59 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:27:47.136 13:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:47.136 13:45:59 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:47.136 13:46:00 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:47.136 13:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:27:47.136 13:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:27:47.136 13:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:27:47.136 13:46:00 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:27:49.662 13:46:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:27:49.662 13:46:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:27:49.662 13:46:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:27:49.662 13:46:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:27:49.662 13:46:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:27:49.662 13:46:02 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # 
return 0 00:27:49.662 13:46:02 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:49.662 [global] 00:27:49.662 thread=1 00:27:49.662 invalidate=1 00:27:49.662 rw=read 00:27:49.662 time_based=1 00:27:49.662 runtime=10 00:27:49.662 ioengine=libaio 00:27:49.662 direct=1 00:27:49.662 bs=262144 00:27:49.662 iodepth=64 00:27:49.662 norandommap=1 00:27:49.662 numjobs=1 00:27:49.662 00:27:49.662 [job0] 00:27:49.662 filename=/dev/nvme0n1 00:27:49.662 [job1] 00:27:49.662 filename=/dev/nvme10n1 00:27:49.662 [job2] 00:27:49.662 filename=/dev/nvme1n1 00:27:49.662 [job3] 00:27:49.662 filename=/dev/nvme2n1 00:27:49.662 [job4] 00:27:49.662 filename=/dev/nvme3n1 00:27:49.662 [job5] 00:27:49.662 filename=/dev/nvme4n1 00:27:49.662 [job6] 00:27:49.662 filename=/dev/nvme5n1 00:27:49.662 [job7] 00:27:49.662 filename=/dev/nvme6n1 00:27:49.662 [job8] 00:27:49.662 filename=/dev/nvme7n1 00:27:49.662 [job9] 00:27:49.662 filename=/dev/nvme8n1 00:27:49.662 [job10] 00:27:49.662 filename=/dev/nvme9n1 00:27:49.662 Could not set queue depth (nvme0n1) 00:27:49.662 Could not set queue depth (nvme10n1) 00:27:49.662 Could not set queue depth (nvme1n1) 00:27:49.662 Could not set queue depth (nvme2n1) 00:27:49.662 Could not set queue depth (nvme3n1) 00:27:49.662 Could not set queue depth (nvme4n1) 00:27:49.662 Could not set queue depth (nvme5n1) 00:27:49.662 Could not set queue depth (nvme6n1) 00:27:49.662 Could not set queue depth (nvme7n1) 00:27:49.662 Could not set queue depth (nvme8n1) 00:27:49.662 Could not set queue depth (nvme9n1) 00:27:49.662 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:49.662 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:49.662 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:49.662 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:49.662 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:49.662 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:49.662 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:49.662 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:49.662 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:49.662 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:49.662 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:49.662 fio-3.35 00:27:49.662 Starting 11 threads 00:28:01.873 00:28:01.873 job0: (groupid=0, jobs=1): err= 0: pid=103083: Wed May 15 13:46:12 2024 00:28:01.873 read: IOPS=586, BW=147MiB/s (154MB/s)(1486MiB/10125msec) 00:28:01.873 slat (usec): min=14, max=94768, avg=1621.37, stdev=6032.53 00:28:01.873 clat (msec): min=4, max=269, avg=107.22, stdev=31.26 00:28:01.873 lat (msec): min=4, max=269, avg=108.84, stdev=32.12 00:28:01.873 clat percentiles (msec): 00:28:01.873 | 1.00th=[ 44], 5.00th=[ 65], 10.00th=[ 82], 20.00th=[ 88], 00:28:01.873 | 30.00th=[ 92], 
40.00th=[ 95], 50.00th=[ 100], 60.00th=[ 105], 00:28:01.873 | 70.00th=[ 114], 80.00th=[ 133], 90.00th=[ 148], 95.00th=[ 163], 00:28:01.873 | 99.00th=[ 199], 99.50th=[ 232], 99.90th=[ 271], 99.95th=[ 271], 00:28:01.873 | 99.99th=[ 271] 00:28:01.873 bw ( KiB/s): min=93696, max=193024, per=7.81%, avg=150503.35, stdev=31973.91, samples=20 00:28:01.873 iops : min= 366, max= 754, avg=587.80, stdev=124.90, samples=20 00:28:01.873 lat (msec) : 10=0.24%, 20=0.22%, 50=1.09%, 100=49.55%, 250=48.43% 00:28:01.873 lat (msec) : 500=0.47% 00:28:01.873 cpu : usr=0.19%, sys=1.90%, ctx=1339, majf=0, minf=4097 00:28:01.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:28:01.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:01.873 issued rwts: total=5943,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:01.873 job1: (groupid=0, jobs=1): err= 0: pid=103085: Wed May 15 13:46:12 2024 00:28:01.873 read: IOPS=649, BW=162MiB/s (170MB/s)(1642MiB/10121msec) 00:28:01.873 slat (usec): min=17, max=100910, avg=1500.56, stdev=5129.65 00:28:01.873 clat (msec): min=8, max=290, avg=96.91, stdev=31.42 00:28:01.873 lat (msec): min=8, max=290, avg=98.41, stdev=32.17 00:28:01.873 clat percentiles (msec): 00:28:01.873 | 1.00th=[ 18], 5.00th=[ 31], 10.00th=[ 63], 20.00th=[ 85], 00:28:01.873 | 30.00th=[ 90], 40.00th=[ 93], 50.00th=[ 96], 60.00th=[ 100], 00:28:01.873 | 70.00th=[ 105], 80.00th=[ 111], 90.00th=[ 130], 95.00th=[ 157], 00:28:01.873 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 275], 99.95th=[ 275], 00:28:01.873 | 99.99th=[ 292] 00:28:01.873 bw ( KiB/s): min=92160, max=326144, per=8.64%, avg=166476.30, stdev=45783.77, samples=20 00:28:01.873 iops : min= 360, max= 1274, avg=650.20, stdev=178.89, samples=20 00:28:01.873 lat (msec) : 10=0.06%, 20=1.40%, 50=7.03%, 100=53.23%, 250=38.15% 00:28:01.873 lat (msec) : 500=0.12% 00:28:01.873 cpu : usr=0.37%, sys=2.32%, ctx=1254, majf=0, minf=4097 00:28:01.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:28:01.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:01.873 issued rwts: total=6569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:01.873 job2: (groupid=0, jobs=1): err= 0: pid=103086: Wed May 15 13:46:12 2024 00:28:01.873 read: IOPS=624, BW=156MiB/s (164MB/s)(1582MiB/10129msec) 00:28:01.873 slat (usec): min=17, max=116049, avg=1551.43, stdev=5951.30 00:28:01.873 clat (msec): min=14, max=258, avg=100.70, stdev=26.98 00:28:01.873 lat (msec): min=14, max=269, avg=102.25, stdev=27.83 00:28:01.873 clat percentiles (msec): 00:28:01.873 | 1.00th=[ 39], 5.00th=[ 75], 10.00th=[ 81], 20.00th=[ 86], 00:28:01.873 | 30.00th=[ 89], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 97], 00:28:01.873 | 70.00th=[ 103], 80.00th=[ 110], 90.00th=[ 136], 95.00th=[ 163], 00:28:01.873 | 99.00th=[ 199], 99.50th=[ 207], 99.90th=[ 243], 99.95th=[ 259], 00:28:01.873 | 99.99th=[ 259] 00:28:01.873 bw ( KiB/s): min=93184, max=186368, per=8.33%, avg=160411.00, stdev=27464.13, samples=20 00:28:01.873 iops : min= 364, max= 728, avg=626.55, stdev=107.36, samples=20 00:28:01.873 lat (msec) : 20=0.11%, 50=1.37%, 100=64.50%, 250=33.94%, 500=0.08% 00:28:01.873 cpu : usr=0.22%, sys=2.02%, ctx=1137, majf=0, 
minf=4097 00:28:01.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:01.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:01.873 issued rwts: total=6329,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:01.873 job3: (groupid=0, jobs=1): err= 0: pid=103087: Wed May 15 13:46:12 2024 00:28:01.873 read: IOPS=674, BW=169MiB/s (177MB/s)(1695MiB/10049msec) 00:28:01.873 slat (usec): min=16, max=88545, avg=1470.40, stdev=5315.03 00:28:01.873 clat (msec): min=12, max=213, avg=93.19, stdev=23.25 00:28:01.873 lat (msec): min=12, max=222, avg=94.66, stdev=24.00 00:28:01.873 clat percentiles (msec): 00:28:01.873 | 1.00th=[ 41], 5.00th=[ 55], 10.00th=[ 65], 20.00th=[ 78], 00:28:01.873 | 30.00th=[ 84], 40.00th=[ 89], 50.00th=[ 93], 60.00th=[ 97], 00:28:01.873 | 70.00th=[ 102], 80.00th=[ 108], 90.00th=[ 125], 95.00th=[ 138], 00:28:01.873 | 99.00th=[ 150], 99.50th=[ 161], 99.90th=[ 176], 99.95th=[ 176], 00:28:01.873 | 99.99th=[ 213] 00:28:01.873 bw ( KiB/s): min=115200, max=259584, per=8.92%, avg=171901.55, stdev=36171.25, samples=20 00:28:01.873 iops : min= 450, max= 1014, avg=671.35, stdev=141.17, samples=20 00:28:01.873 lat (msec) : 20=0.29%, 50=3.54%, 100=64.53%, 250=31.64% 00:28:01.873 cpu : usr=0.25%, sys=2.49%, ctx=1305, majf=0, minf=4097 00:28:01.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:01.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:01.873 issued rwts: total=6780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:01.873 job4: (groupid=0, jobs=1): err= 0: pid=103088: Wed May 15 13:46:12 2024 00:28:01.873 read: IOPS=655, BW=164MiB/s (172MB/s)(1658MiB/10118msec) 00:28:01.873 slat (usec): min=16, max=80969, avg=1465.10, stdev=5007.60 00:28:01.873 clat (msec): min=16, max=265, avg=95.97, stdev=33.58 00:28:01.873 lat (msec): min=16, max=265, avg=97.43, stdev=34.24 00:28:01.873 clat percentiles (msec): 00:28:01.873 | 1.00th=[ 41], 5.00th=[ 55], 10.00th=[ 58], 20.00th=[ 65], 00:28:01.873 | 30.00th=[ 73], 40.00th=[ 88], 50.00th=[ 92], 60.00th=[ 99], 00:28:01.873 | 70.00th=[ 106], 80.00th=[ 126], 90.00th=[ 142], 95.00th=[ 157], 00:28:01.873 | 99.00th=[ 188], 99.50th=[ 203], 99.90th=[ 249], 99.95th=[ 266], 00:28:01.873 | 99.99th=[ 266] 00:28:01.873 bw ( KiB/s): min=89600, max=269824, per=8.73%, avg=168161.90, stdev=51774.43, samples=20 00:28:01.873 iops : min= 350, max= 1054, avg=656.80, stdev=202.30, samples=20 00:28:01.873 lat (msec) : 20=0.24%, 50=2.08%, 100=61.15%, 250=36.44%, 500=0.09% 00:28:01.873 cpu : usr=0.23%, sys=2.10%, ctx=1445, majf=0, minf=4097 00:28:01.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:01.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:01.873 issued rwts: total=6633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.873 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:01.873 job5: (groupid=0, jobs=1): err= 0: pid=103089: Wed May 15 13:46:12 2024 00:28:01.873 read: IOPS=609, BW=152MiB/s (160MB/s)(1542MiB/10119msec) 00:28:01.873 slat (usec): min=17, max=67950, avg=1616.22, 
stdev=5566.96 00:28:01.873 clat (msec): min=29, max=294, avg=103.19, stdev=24.43 00:28:01.873 lat (msec): min=30, max=294, avg=104.80, stdev=25.15 00:28:01.873 clat percentiles (msec): 00:28:01.873 | 1.00th=[ 65], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 88], 00:28:01.874 | 30.00th=[ 91], 40.00th=[ 94], 50.00th=[ 97], 60.00th=[ 102], 00:28:01.874 | 70.00th=[ 107], 80.00th=[ 114], 90.00th=[ 129], 95.00th=[ 157], 00:28:01.874 | 99.00th=[ 184], 99.50th=[ 228], 99.90th=[ 271], 99.95th=[ 296], 00:28:01.874 | 99.99th=[ 296] 00:28:01.874 bw ( KiB/s): min=95872, max=182784, per=8.11%, avg=156210.15, stdev=26072.83, samples=20 00:28:01.874 iops : min= 374, max= 714, avg=610.10, stdev=101.88, samples=20 00:28:01.874 lat (msec) : 50=0.18%, 100=57.05%, 250=42.39%, 500=0.39% 00:28:01.874 cpu : usr=0.27%, sys=2.10%, ctx=1269, majf=0, minf=4097 00:28:01.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:01.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:01.874 issued rwts: total=6167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:01.874 job6: (groupid=0, jobs=1): err= 0: pid=103090: Wed May 15 13:46:12 2024 00:28:01.874 read: IOPS=595, BW=149MiB/s (156MB/s)(1507MiB/10117msec) 00:28:01.874 slat (usec): min=17, max=64215, avg=1654.58, stdev=5398.40 00:28:01.874 clat (msec): min=33, max=275, avg=105.58, stdev=22.73 00:28:01.874 lat (msec): min=33, max=275, avg=107.24, stdev=23.49 00:28:01.874 clat percentiles (msec): 00:28:01.874 | 1.00th=[ 78], 5.00th=[ 85], 10.00th=[ 88], 20.00th=[ 91], 00:28:01.874 | 30.00th=[ 94], 40.00th=[ 97], 50.00th=[ 100], 60.00th=[ 104], 00:28:01.874 | 70.00th=[ 108], 80.00th=[ 115], 90.00th=[ 133], 95.00th=[ 159], 00:28:01.874 | 99.00th=[ 182], 99.50th=[ 192], 99.90th=[ 275], 99.95th=[ 275], 00:28:01.874 | 99.99th=[ 275] 00:28:01.874 bw ( KiB/s): min=92672, max=175104, per=7.93%, avg=152676.40, stdev=23440.24, samples=20 00:28:01.874 iops : min= 362, max= 684, avg=596.35, stdev=91.66, samples=20 00:28:01.874 lat (msec) : 50=0.46%, 100=51.75%, 250=47.67%, 500=0.12% 00:28:01.874 cpu : usr=0.20%, sys=2.05%, ctx=1331, majf=0, minf=4097 00:28:01.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:01.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:01.874 issued rwts: total=6027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:01.874 job7: (groupid=0, jobs=1): err= 0: pid=103091: Wed May 15 13:46:12 2024 00:28:01.874 read: IOPS=730, BW=183MiB/s (191MB/s)(1849MiB/10127msec) 00:28:01.874 slat (usec): min=17, max=67154, avg=1336.41, stdev=4811.12 00:28:01.874 clat (msec): min=32, max=258, avg=86.23, stdev=34.63 00:28:01.874 lat (msec): min=32, max=258, avg=87.56, stdev=35.33 00:28:01.874 clat percentiles (msec): 00:28:01.874 | 1.00th=[ 45], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 59], 00:28:01.874 | 30.00th=[ 63], 40.00th=[ 67], 50.00th=[ 75], 60.00th=[ 90], 00:28:01.874 | 70.00th=[ 97], 80.00th=[ 110], 90.00th=[ 132], 95.00th=[ 159], 00:28:01.874 | 99.00th=[ 197], 99.50th=[ 224], 99.90th=[ 259], 99.95th=[ 259], 00:28:01.874 | 99.99th=[ 259] 00:28:01.874 bw ( KiB/s): min=87040, max=273920, per=9.74%, avg=187605.15, stdev=64785.51, samples=20 00:28:01.874 iops : 
min= 340, max= 1070, avg=732.70, stdev=253.17, samples=20 00:28:01.874 lat (msec) : 50=3.79%, 100=68.56%, 250=27.44%, 500=0.22% 00:28:01.874 cpu : usr=0.29%, sys=2.37%, ctx=1455, majf=0, minf=4097 00:28:01.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:28:01.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:01.874 issued rwts: total=7394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:01.874 job8: (groupid=0, jobs=1): err= 0: pid=103092: Wed May 15 13:46:12 2024 00:28:01.874 read: IOPS=621, BW=155MiB/s (163MB/s)(1561MiB/10050msec) 00:28:01.874 slat (usec): min=17, max=70573, avg=1573.49, stdev=5277.34 00:28:01.874 clat (msec): min=13, max=173, avg=101.30, stdev=22.42 00:28:01.874 lat (msec): min=13, max=206, avg=102.88, stdev=23.23 00:28:01.874 clat percentiles (msec): 00:28:01.874 | 1.00th=[ 53], 5.00th=[ 71], 10.00th=[ 78], 20.00th=[ 86], 00:28:01.874 | 30.00th=[ 90], 40.00th=[ 94], 50.00th=[ 97], 60.00th=[ 103], 00:28:01.874 | 70.00th=[ 108], 80.00th=[ 121], 90.00th=[ 136], 95.00th=[ 146], 00:28:01.874 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 169], 99.95th=[ 171], 00:28:01.874 | 99.99th=[ 174] 00:28:01.874 bw ( KiB/s): min=109568, max=196096, per=8.21%, avg=158187.35, stdev=24326.00, samples=20 00:28:01.874 iops : min= 428, max= 766, avg=617.85, stdev=94.99, samples=20 00:28:01.874 lat (msec) : 20=0.11%, 50=0.51%, 100=55.28%, 250=44.10% 00:28:01.874 cpu : usr=0.25%, sys=2.12%, ctx=1267, majf=0, minf=4097 00:28:01.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:28:01.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:01.874 issued rwts: total=6245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:01.874 job9: (groupid=0, jobs=1): err= 0: pid=103093: Wed May 15 13:46:12 2024 00:28:01.874 read: IOPS=751, BW=188MiB/s (197MB/s)(1902MiB/10128msec) 00:28:01.874 slat (usec): min=17, max=92729, avg=1278.63, stdev=4738.63 00:28:01.874 clat (msec): min=9, max=297, avg=83.76, stdev=36.90 00:28:01.874 lat (msec): min=9, max=297, avg=85.04, stdev=37.63 00:28:01.874 clat percentiles (msec): 00:28:01.874 | 1.00th=[ 17], 5.00th=[ 41], 10.00th=[ 53], 20.00th=[ 58], 00:28:01.874 | 30.00th=[ 63], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 87], 00:28:01.874 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 133], 95.00th=[ 161], 00:28:01.874 | 99.00th=[ 213], 99.50th=[ 218], 99.90th=[ 271], 99.95th=[ 271], 00:28:01.874 | 99.99th=[ 296] 00:28:01.874 bw ( KiB/s): min=81408, max=293376, per=10.02%, avg=193070.55, stdev=64972.15, samples=20 00:28:01.874 iops : min= 318, max= 1146, avg=754.05, stdev=253.80, samples=20 00:28:01.874 lat (msec) : 10=0.03%, 20=1.91%, 50=6.56%, 100=65.44%, 250=25.86% 00:28:01.874 lat (msec) : 500=0.21% 00:28:01.874 cpu : usr=0.21%, sys=2.34%, ctx=1481, majf=0, minf=4097 00:28:01.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:28:01.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:01.874 issued rwts: total=7607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.874 latency : target=0, window=0, percentile=100.00%, depth=64 
00:28:01.874 job10: (groupid=0, jobs=1): err= 0: pid=103094: Wed May 15 13:46:12 2024 00:28:01.874 read: IOPS=1050, BW=263MiB/s (275MB/s)(2633MiB/10025msec) 00:28:01.874 slat (usec): min=17, max=107236, avg=921.32, stdev=3587.54 00:28:01.874 clat (msec): min=12, max=235, avg=59.89, stdev=25.78 00:28:01.874 lat (msec): min=13, max=236, avg=60.82, stdev=26.24 00:28:01.874 clat percentiles (msec): 00:28:01.874 | 1.00th=[ 21], 5.00th=[ 27], 10.00th=[ 31], 20.00th=[ 37], 00:28:01.874 | 30.00th=[ 43], 40.00th=[ 49], 50.00th=[ 55], 60.00th=[ 62], 00:28:01.874 | 70.00th=[ 70], 80.00th=[ 89], 90.00th=[ 97], 95.00th=[ 105], 00:28:01.874 | 99.00th=[ 118], 99.50th=[ 140], 99.90th=[ 150], 99.95th=[ 155], 00:28:01.874 | 99.99th=[ 236] 00:28:01.874 bw ( KiB/s): min=163840, max=465408, per=13.90%, avg=267848.30, stdev=102239.18, samples=20 00:28:01.874 iops : min= 640, max= 1818, avg=1046.25, stdev=399.39, samples=20 00:28:01.874 lat (msec) : 20=0.65%, 50=41.92%, 100=50.01%, 250=7.43% 00:28:01.874 cpu : usr=0.33%, sys=3.23%, ctx=1880, majf=0, minf=4097 00:28:01.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:28:01.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:01.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:01.874 issued rwts: total=10530,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:01.874 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:01.874 00:28:01.874 Run status group 0 (all jobs): 00:28:01.874 READ: bw=1881MiB/s (1973MB/s), 147MiB/s-263MiB/s (154MB/s-275MB/s), io=18.6GiB (20.0GB), run=10025-10129msec 00:28:01.874 00:28:01.874 Disk stats (read/write): 00:28:01.874 nvme0n1: ios=11799/0, merge=0/0, ticks=1239277/0, in_queue=1239277, util=97.46% 00:28:01.874 nvme10n1: ios=13029/0, merge=0/0, ticks=1241395/0, in_queue=1241395, util=98.03% 00:28:01.874 nvme1n1: ios=12531/0, merge=0/0, ticks=1236674/0, in_queue=1236674, util=97.99% 00:28:01.874 nvme2n1: ios=13539/0, merge=0/0, ticks=1246640/0, in_queue=1246640, util=98.32% 00:28:01.874 nvme3n1: ios=13138/0, merge=0/0, ticks=1233147/0, in_queue=1233147, util=97.77% 00:28:01.874 nvme4n1: ios=12236/0, merge=0/0, ticks=1235727/0, in_queue=1235727, util=98.32% 00:28:01.874 nvme5n1: ios=11931/0, merge=0/0, ticks=1235364/0, in_queue=1235364, util=98.06% 00:28:01.874 nvme6n1: ios=14660/0, merge=0/0, ticks=1234808/0, in_queue=1234808, util=98.21% 00:28:01.874 nvme7n1: ios=12428/0, merge=0/0, ticks=1246563/0, in_queue=1246563, util=98.83% 00:28:01.874 nvme8n1: ios=15094/0, merge=0/0, ticks=1233063/0, in_queue=1233063, util=98.35% 00:28:01.874 nvme9n1: ios=20346/0, merge=0/0, ticks=1210073/0, in_queue=1210073, util=98.57% 00:28:01.874 13:46:12 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:28:01.874 [global] 00:28:01.874 thread=1 00:28:01.874 invalidate=1 00:28:01.874 rw=randwrite 00:28:01.874 time_based=1 00:28:01.874 runtime=10 00:28:01.874 ioengine=libaio 00:28:01.874 direct=1 00:28:01.874 bs=262144 00:28:01.874 iodepth=64 00:28:01.874 norandommap=1 00:28:01.874 numjobs=1 00:28:01.874 00:28:01.874 [job0] 00:28:01.874 filename=/dev/nvme0n1 00:28:01.874 [job1] 00:28:01.874 filename=/dev/nvme10n1 00:28:01.874 [job2] 00:28:01.874 filename=/dev/nvme1n1 00:28:01.874 [job3] 00:28:01.874 filename=/dev/nvme2n1 00:28:01.874 [job4] 00:28:01.874 filename=/dev/nvme3n1 00:28:01.874 [job5] 00:28:01.874 filename=/dev/nvme4n1 00:28:01.874 [job6] 
00:28:01.874 filename=/dev/nvme5n1 00:28:01.874 [job7] 00:28:01.874 filename=/dev/nvme6n1 00:28:01.874 [job8] 00:28:01.874 filename=/dev/nvme7n1 00:28:01.874 [job9] 00:28:01.874 filename=/dev/nvme8n1 00:28:01.874 [job10] 00:28:01.874 filename=/dev/nvme9n1 00:28:01.874 Could not set queue depth (nvme0n1) 00:28:01.874 Could not set queue depth (nvme10n1) 00:28:01.874 Could not set queue depth (nvme1n1) 00:28:01.874 Could not set queue depth (nvme2n1) 00:28:01.874 Could not set queue depth (nvme3n1) 00:28:01.874 Could not set queue depth (nvme4n1) 00:28:01.874 Could not set queue depth (nvme5n1) 00:28:01.874 Could not set queue depth (nvme6n1) 00:28:01.874 Could not set queue depth (nvme7n1) 00:28:01.874 Could not set queue depth (nvme8n1) 00:28:01.875 Could not set queue depth (nvme9n1) 00:28:01.875 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.875 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.875 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.875 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.875 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.875 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.875 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.875 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.875 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.875 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.875 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:28:01.875 fio-3.35 00:28:01.875 Starting 11 threads 00:28:11.844 00:28:11.844 job0: (groupid=0, jobs=1): err= 0: pid=103292: Wed May 15 13:46:23 2024 00:28:11.844 write: IOPS=468, BW=117MiB/s (123MB/s)(1183MiB/10091msec); 0 zone resets 00:28:11.844 slat (usec): min=29, max=12167, avg=2073.64, stdev=3610.38 00:28:11.844 clat (msec): min=14, max=201, avg=134.35, stdev=18.62 00:28:11.844 lat (msec): min=14, max=201, avg=136.43, stdev=18.62 00:28:11.844 clat percentiles (msec): 00:28:11.844 | 1.00th=[ 77], 5.00th=[ 108], 10.00th=[ 113], 20.00th=[ 115], 00:28:11.844 | 30.00th=[ 120], 40.00th=[ 138], 50.00th=[ 142], 60.00th=[ 146], 00:28:11.844 | 70.00th=[ 148], 80.00th=[ 148], 90.00th=[ 150], 95.00th=[ 150], 00:28:11.844 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 194], 99.95th=[ 194], 00:28:11.844 | 99.99th=[ 201] 00:28:11.844 bw ( KiB/s): min=106496, max=145408, per=8.38%, avg=119500.80, stdev=13658.23, samples=20 00:28:11.844 iops : min= 416, max= 568, avg=466.80, stdev=53.35, samples=20 00:28:11.844 lat (msec) : 20=0.11%, 50=0.42%, 100=1.33%, 250=98.14% 00:28:11.844 cpu : usr=1.44%, sys=1.58%, ctx=2171, majf=0, minf=1 00:28:11.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:28:11.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.844 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.844 issued rwts: total=0,4731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.844 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.844 job1: (groupid=0, jobs=1): err= 0: pid=103293: Wed May 15 13:46:23 2024 00:28:11.844 write: IOPS=689, BW=172MiB/s (181MB/s)(1735MiB/10073msec); 0 zone resets 00:28:11.844 slat (usec): min=22, max=18150, avg=1435.09, stdev=2450.76 00:28:11.844 clat (msec): min=5, max=191, avg=91.41, stdev=13.81 00:28:11.844 lat (msec): min=5, max=191, avg=92.84, stdev=13.83 00:28:11.844 clat percentiles (msec): 00:28:11.844 | 1.00th=[ 73], 5.00th=[ 74], 10.00th=[ 77], 20.00th=[ 79], 00:28:11.844 | 30.00th=[ 83], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 96], 00:28:11.844 | 70.00th=[ 97], 80.00th=[ 99], 90.00th=[ 100], 95.00th=[ 101], 00:28:11.844 | 99.00th=[ 146], 99.50th=[ 165], 99.90th=[ 188], 99.95th=[ 188], 00:28:11.844 | 99.99th=[ 192] 00:28:11.844 bw ( KiB/s): min=104960, max=209920, per=12.35%, avg=176076.80, stdev=24161.50, samples=20 00:28:11.844 iops : min= 410, max= 820, avg=687.80, stdev=94.38, samples=20 00:28:11.844 lat (msec) : 10=0.03%, 20=0.04%, 50=0.12%, 100=94.35%, 250=5.46% 00:28:11.844 cpu : usr=1.75%, sys=1.85%, ctx=9512, majf=0, minf=1 00:28:11.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:11.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.844 issued rwts: total=0,6941,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.844 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.844 job2: (groupid=0, jobs=1): err= 0: pid=103305: Wed May 15 13:46:23 2024 00:28:11.844 write: IOPS=691, BW=173MiB/s (181MB/s)(1740MiB/10063msec); 0 zone resets 00:28:11.844 slat (usec): min=26, max=8960, avg=1413.20, stdev=2418.84 00:28:11.844 clat (msec): min=10, max=136, avg=91.08, stdev=12.71 00:28:11.844 lat (msec): min=10, max=136, avg=92.50, stdev=12.73 00:28:11.844 clat percentiles (msec): 00:28:11.844 | 1.00th=[ 55], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 78], 00:28:11.844 | 30.00th=[ 81], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 100], 00:28:11.844 | 70.00th=[ 101], 80.00th=[ 102], 90.00th=[ 103], 95.00th=[ 103], 00:28:11.844 | 99.00th=[ 106], 99.50th=[ 108], 99.90th=[ 132], 99.95th=[ 136], 00:28:11.844 | 99.99th=[ 138] 00:28:11.844 bw ( KiB/s): min=160768, max=215040, per=12.39%, avg=176545.45, stdev=21537.35, samples=20 00:28:11.844 iops : min= 628, max= 840, avg=689.60, stdev=84.13, samples=20 00:28:11.844 lat (msec) : 20=0.07%, 50=0.80%, 100=65.46%, 250=33.66% 00:28:11.844 cpu : usr=1.99%, sys=1.65%, ctx=9293, majf=0, minf=1 00:28:11.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:11.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.844 issued rwts: total=0,6960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.844 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.844 job3: (groupid=0, jobs=1): err= 0: pid=103306: Wed May 15 13:46:23 2024 00:28:11.844 write: IOPS=465, BW=116MiB/s (122MB/s)(1176MiB/10103msec); 0 zone resets 00:28:11.844 slat (usec): min=23, max=24583, avg=2121.55, stdev=3663.27 00:28:11.844 clat (msec): min=7, max=212, avg=135.31, stdev=19.92 00:28:11.844 lat (msec): min=7, max=212, avg=137.43, stdev=19.92 00:28:11.844 clat percentiles (msec): 
00:28:11.844 | 1.00th=[ 64], 5.00th=[ 108], 10.00th=[ 113], 20.00th=[ 115], 00:28:11.844 | 30.00th=[ 120], 40.00th=[ 140], 50.00th=[ 144], 60.00th=[ 146], 00:28:11.844 | 70.00th=[ 148], 80.00th=[ 148], 90.00th=[ 150], 95.00th=[ 155], 00:28:11.844 | 99.00th=[ 171], 99.50th=[ 176], 99.90th=[ 205], 99.95th=[ 205], 00:28:11.844 | 99.99th=[ 213] 00:28:11.844 bw ( KiB/s): min=104239, max=145699, per=8.34%, avg=118860.65, stdev=13987.05, samples=20 00:28:11.844 iops : min= 407, max= 569, avg=464.05, stdev=54.72, samples=20 00:28:11.844 lat (msec) : 10=0.17%, 20=0.17%, 50=0.53%, 100=0.49%, 250=98.64% 00:28:11.844 cpu : usr=1.37%, sys=1.33%, ctx=4651, majf=0, minf=1 00:28:11.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:28:11.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.844 issued rwts: total=0,4703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.844 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.844 job4: (groupid=0, jobs=1): err= 0: pid=103307: Wed May 15 13:46:23 2024 00:28:11.844 write: IOPS=689, BW=172MiB/s (181MB/s)(1734MiB/10059msec); 0 zone resets 00:28:11.844 slat (usec): min=17, max=58668, avg=1435.63, stdev=2529.03 00:28:11.844 clat (msec): min=55, max=215, avg=91.37, stdev=13.56 00:28:11.844 lat (msec): min=59, max=224, avg=92.81, stdev=13.54 00:28:11.844 clat percentiles (msec): 00:28:11.844 | 1.00th=[ 73], 5.00th=[ 74], 10.00th=[ 78], 20.00th=[ 80], 00:28:11.844 | 30.00th=[ 82], 40.00th=[ 92], 50.00th=[ 93], 60.00th=[ 97], 00:28:11.844 | 70.00th=[ 97], 80.00th=[ 99], 90.00th=[ 100], 95.00th=[ 100], 00:28:11.844 | 99.00th=[ 144], 99.50th=[ 161], 99.90th=[ 207], 99.95th=[ 215], 00:28:11.844 | 99.99th=[ 215] 00:28:11.844 bw ( KiB/s): min=99840, max=209920, per=12.34%, avg=175923.20, stdev=25048.32, samples=20 00:28:11.844 iops : min= 390, max= 820, avg=687.20, stdev=97.84, samples=20 00:28:11.844 lat (msec) : 100=96.03%, 250=3.97% 00:28:11.844 cpu : usr=1.53%, sys=2.19%, ctx=8834, majf=0, minf=1 00:28:11.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:11.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.844 issued rwts: total=0,6935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.844 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.844 job5: (groupid=0, jobs=1): err= 0: pid=103308: Wed May 15 13:46:23 2024 00:28:11.844 write: IOPS=466, BW=117MiB/s (122MB/s)(1177MiB/10090msec); 0 zone resets 00:28:11.844 slat (usec): min=28, max=28275, avg=2118.86, stdev=3641.56 00:28:11.845 clat (msec): min=31, max=198, avg=135.02, stdev=17.36 00:28:11.845 lat (msec): min=31, max=198, avg=137.14, stdev=17.28 00:28:11.845 clat percentiles (msec): 00:28:11.845 | 1.00th=[ 92], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 115], 00:28:11.845 | 30.00th=[ 120], 40.00th=[ 138], 50.00th=[ 142], 60.00th=[ 146], 00:28:11.845 | 70.00th=[ 148], 80.00th=[ 148], 90.00th=[ 150], 95.00th=[ 150], 00:28:11.845 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 190], 99.95th=[ 192], 00:28:11.845 | 99.99th=[ 199] 00:28:11.845 bw ( KiB/s): min=107008, max=145408, per=8.34%, avg=118897.90, stdev=13615.80, samples=20 00:28:11.845 iops : min= 418, max= 568, avg=464.40, stdev=53.20, samples=20 00:28:11.845 lat (msec) : 50=0.25%, 100=0.96%, 250=98.79% 00:28:11.845 cpu : usr=1.26%, sys=1.75%, 
ctx=5982, majf=0, minf=1 00:28:11.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:28:11.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.845 issued rwts: total=0,4707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.845 job6: (groupid=0, jobs=1): err= 0: pid=103309: Wed May 15 13:46:23 2024 00:28:11.845 write: IOPS=359, BW=90.0MiB/s (94.4MB/s)(914MiB/10154msec); 0 zone resets 00:28:11.845 slat (usec): min=25, max=21371, avg=2732.34, stdev=4739.76 00:28:11.845 clat (msec): min=25, max=295, avg=175.00, stdev=23.90 00:28:11.845 lat (msec): min=25, max=295, avg=177.73, stdev=23.81 00:28:11.845 clat percentiles (msec): 00:28:11.845 | 1.00th=[ 99], 5.00th=[ 144], 10.00th=[ 148], 20.00th=[ 153], 00:28:11.845 | 30.00th=[ 161], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:28:11.845 | 70.00th=[ 188], 80.00th=[ 190], 90.00th=[ 192], 95.00th=[ 205], 00:28:11.845 | 99.00th=[ 224], 99.50th=[ 249], 99.90th=[ 288], 99.95th=[ 296], 00:28:11.845 | 99.99th=[ 296] 00:28:11.845 bw ( KiB/s): min=79872, max=108544, per=6.45%, avg=91955.20, stdev=9177.59, samples=20 00:28:11.845 iops : min= 312, max= 424, avg=359.20, stdev=35.85, samples=20 00:28:11.845 lat (msec) : 50=0.41%, 100=0.66%, 250=98.44%, 500=0.49% 00:28:11.845 cpu : usr=1.05%, sys=0.98%, ctx=3369, majf=0, minf=1 00:28:11.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:28:11.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.845 issued rwts: total=0,3655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.845 job7: (groupid=0, jobs=1): err= 0: pid=103310: Wed May 15 13:46:23 2024 00:28:11.845 write: IOPS=690, BW=173MiB/s (181MB/s)(1739MiB/10067msec); 0 zone resets 00:28:11.845 slat (usec): min=25, max=9889, avg=1428.38, stdev=2423.61 00:28:11.845 clat (msec): min=6, max=141, avg=91.14, stdev=12.79 00:28:11.845 lat (msec): min=6, max=142, avg=92.57, stdev=12.81 00:28:11.845 clat percentiles (msec): 00:28:11.845 | 1.00th=[ 62], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 78], 00:28:11.845 | 30.00th=[ 81], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 100], 00:28:11.845 | 70.00th=[ 101], 80.00th=[ 102], 90.00th=[ 102], 95.00th=[ 103], 00:28:11.845 | 99.00th=[ 106], 99.50th=[ 106], 99.90th=[ 133], 99.95th=[ 138], 00:28:11.845 | 99.99th=[ 142] 00:28:11.845 bw ( KiB/s): min=161792, max=215552, per=12.38%, avg=176478.15, stdev=21956.49, samples=20 00:28:11.845 iops : min= 632, max= 842, avg=689.35, stdev=85.77, samples=20 00:28:11.845 lat (msec) : 10=0.03%, 20=0.27%, 50=0.52%, 100=63.28%, 250=35.90% 00:28:11.845 cpu : usr=2.09%, sys=2.13%, ctx=7742, majf=0, minf=1 00:28:11.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:28:11.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.845 issued rwts: total=0,6956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.845 job8: (groupid=0, jobs=1): err= 0: pid=103311: Wed May 15 13:46:23 2024 00:28:11.845 write: IOPS=360, BW=90.2MiB/s (94.5MB/s)(916MiB/10159msec); 0 zone resets 
00:28:11.845 slat (usec): min=23, max=17458, avg=2724.70, stdev=4705.26 00:28:11.845 clat (msec): min=7, max=305, avg=174.64, stdev=24.73 00:28:11.845 lat (msec): min=7, max=305, avg=177.36, stdev=24.66 00:28:11.845 clat percentiles (msec): 00:28:11.845 | 1.00th=[ 85], 5.00th=[ 144], 10.00th=[ 148], 20.00th=[ 153], 00:28:11.845 | 30.00th=[ 161], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:28:11.845 | 70.00th=[ 188], 80.00th=[ 190], 90.00th=[ 192], 95.00th=[ 205], 00:28:11.845 | 99.00th=[ 224], 99.50th=[ 259], 99.90th=[ 296], 99.95th=[ 305], 00:28:11.845 | 99.99th=[ 305] 00:28:11.845 bw ( KiB/s): min=77979, max=108761, per=6.47%, avg=92237.75, stdev=9330.48, samples=20 00:28:11.845 iops : min= 304, max= 424, avg=359.95, stdev=36.25, samples=20 00:28:11.845 lat (msec) : 10=0.08%, 20=0.16%, 50=0.27%, 100=0.66%, 250=98.23% 00:28:11.845 lat (msec) : 500=0.60% 00:28:11.845 cpu : usr=0.87%, sys=1.15%, ctx=4798, majf=0, minf=1 00:28:11.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:28:11.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.845 issued rwts: total=0,3664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.845 job9: (groupid=0, jobs=1): err= 0: pid=103312: Wed May 15 13:46:23 2024 00:28:11.845 write: IOPS=357, BW=89.4MiB/s (93.7MB/s)(907MiB/10149msec); 0 zone resets 00:28:11.845 slat (usec): min=24, max=86010, avg=2751.06, stdev=4917.03 00:28:11.845 clat (msec): min=88, max=291, avg=176.09, stdev=20.65 00:28:11.845 lat (msec): min=88, max=291, avg=178.84, stdev=20.38 00:28:11.845 clat percentiles (msec): 00:28:11.845 | 1.00th=[ 138], 5.00th=[ 144], 10.00th=[ 148], 20.00th=[ 153], 00:28:11.845 | 30.00th=[ 165], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:28:11.845 | 70.00th=[ 188], 80.00th=[ 190], 90.00th=[ 192], 95.00th=[ 207], 00:28:11.845 | 99.00th=[ 226], 99.50th=[ 243], 99.90th=[ 284], 99.95th=[ 292], 00:28:11.845 | 99.99th=[ 292] 00:28:11.845 bw ( KiB/s): min=79872, max=108544, per=6.40%, avg=91289.60, stdev=9165.11, samples=20 00:28:11.845 iops : min= 312, max= 424, avg=356.60, stdev=35.80, samples=20 00:28:11.845 lat (msec) : 100=0.11%, 250=99.39%, 500=0.50% 00:28:11.845 cpu : usr=0.78%, sys=1.26%, ctx=4439, majf=0, minf=1 00:28:11.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:28:11.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.845 issued rwts: total=0,3629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.845 job10: (groupid=0, jobs=1): err= 0: pid=103313: Wed May 15 13:46:23 2024 00:28:11.845 write: IOPS=362, BW=90.6MiB/s (95.1MB/s)(920MiB/10152msec); 0 zone resets 00:28:11.845 slat (usec): min=23, max=54099, avg=2677.89, stdev=4760.51 00:28:11.845 clat (msec): min=14, max=300, avg=173.71, stdev=26.04 00:28:11.845 lat (msec): min=14, max=300, avg=176.39, stdev=26.05 00:28:11.845 clat percentiles (msec): 00:28:11.845 | 1.00th=[ 63], 5.00th=[ 140], 10.00th=[ 146], 20.00th=[ 153], 00:28:11.845 | 30.00th=[ 159], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:28:11.845 | 70.00th=[ 188], 80.00th=[ 190], 90.00th=[ 194], 95.00th=[ 207], 00:28:11.845 | 99.00th=[ 224], 99.50th=[ 253], 99.90th=[ 292], 99.95th=[ 300], 00:28:11.845 | 
99.99th=[ 300] 00:28:11.845 bw ( KiB/s): min=78336, max=110592, per=6.50%, avg=92620.80, stdev=9721.47, samples=20 00:28:11.845 iops : min= 306, max= 432, avg=361.80, stdev=37.97, samples=20 00:28:11.845 lat (msec) : 20=0.05%, 50=0.33%, 100=1.20%, 250=97.83%, 500=0.60% 00:28:11.845 cpu : usr=0.95%, sys=1.12%, ctx=4771, majf=0, minf=1 00:28:11.845 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:28:11.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:28:11.845 issued rwts: total=0,3681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.845 latency : target=0, window=0, percentile=100.00%, depth=64 00:28:11.845 00:28:11.845 Run status group 0 (all jobs): 00:28:11.845 WRITE: bw=1392MiB/s (1460MB/s), 89.4MiB/s-173MiB/s (93.7MB/s-181MB/s), io=13.8GiB (14.8GB), run=10059-10159msec 00:28:11.845 00:28:11.845 Disk stats (read/write): 00:28:11.845 nvme0n1: ios=49/9312, merge=0/0, ticks=52/1213569, in_queue=1213621, util=97.65% 00:28:11.845 nvme10n1: ios=49/13741, merge=0/0, ticks=36/1216712, in_queue=1216748, util=97.92% 00:28:11.845 nvme1n1: ios=33/13767, merge=0/0, ticks=31/1216558, in_queue=1216589, util=97.96% 00:28:11.845 nvme2n1: ios=5/9283, merge=0/0, ticks=5/1216812, in_queue=1216817, util=98.09% 00:28:11.845 nvme3n1: ios=21/13691, merge=0/0, ticks=52/1214466, in_queue=1214518, util=97.91% 00:28:11.845 nvme4n1: ios=0/9261, merge=0/0, ticks=0/1213384, in_queue=1213384, util=98.08% 00:28:11.845 nvme5n1: ios=18/7170, merge=0/0, ticks=141/1212552, in_queue=1212693, util=98.38% 00:28:11.845 nvme6n1: ios=0/13777, merge=0/0, ticks=0/1216303, in_queue=1216303, util=98.42% 00:28:11.845 nvme7n1: ios=0/7206, merge=0/0, ticks=0/1214733, in_queue=1214733, util=98.80% 00:28:11.845 nvme8n1: ios=0/7112, merge=0/0, ticks=0/1211574, in_queue=1211574, util=98.64% 00:28:11.845 nvme9n1: ios=0/7227, merge=0/0, ticks=0/1213219, in_queue=1213219, util=98.82% 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:11.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.845 
13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:11.845 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:28:11.846 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:28:11.846 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:11.846 13:46:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:28:11.846 
NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:28:11.846 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:28:11.846 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:28:11.846 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:28:11.846 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:28:11.846 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:28:11.846 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:11.846 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 
1 $NVMF_SUBSYS) 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:28:11.847 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.847 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:12.106 13:46:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.106 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:28:12.106 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:28:12.106 13:46:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:28:12.106 13:46:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:12.106 13:46:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:28:12.106 13:46:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:12.106 13:46:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:28:12.106 13:46:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:12.107 13:46:24 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:12.107 rmmod nvme_tcp 00:28:12.107 rmmod nvme_fabrics 00:28:12.107 rmmod nvme_keyring 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 102611 ']' 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 102611 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 102611 ']' 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 102611 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102611 00:28:12.107 killing process with pid 102611 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102611' 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 102611 00:28:12.107 [2024-05-15 13:46:25.059104] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:12.107 13:46:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 102611 00:28:12.673 13:46:25 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:12.673 13:46:25 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:12.673 13:46:25 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:12.673 13:46:25 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.673 13:46:25 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.673 13:46:25 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.673 13:46:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.673 13:46:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.673 13:46:25 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:12.673 00:28:12.673 real 0m49.653s 00:28:12.673 user 2m47.226s 00:28:12.673 sys 0m24.691s 00:28:12.673 13:46:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:12.673 13:46:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:28:12.673 ************************************ 00:28:12.673 END TEST nvmf_multiconnection 00:28:12.673 ************************************ 00:28:12.673 13:46:25 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:12.673 13:46:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:12.673 13:46:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:12.673 13:46:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:12.673 ************************************ 00:28:12.673 START TEST nvmf_initiator_timeout 00:28:12.673 ************************************ 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:28:12.673 * Looking for test storage... 
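The multiconnection teardown traced above repeats one pattern per subsystem: disconnect the initiator from cnode$i, poll lsblk until the matching serial (SPDK$i) disappears, then remove the subsystem over RPC. A minimal sketch of that loop, using the NVMF_SUBSYS count and the rpc_cmd wrapper that appear in the trace (the 1-second poll interval is an assumption, not taken from the scripts):

  # Hedged sketch of multiconnection.sh's per-subsystem teardown (script lines @37-@40 in the trace).
  for i in $(seq 1 "$NVMF_SUBSYS"); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # waitforserial_disconnect: wait until serial SPDK$i is gone from lsblk output
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
          sleep 1
      done
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done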
00:28:12.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.673 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:12.674 13:46:25 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.674 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:12.932 Cannot find device "nvmf_tgt_br" 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:12.932 Cannot find device "nvmf_tgt_br2" 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:12.932 Cannot find device "nvmf_tgt_br" 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:12.932 Cannot find device "nvmf_tgt_br2" 00:28:12.932 13:46:25 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:12.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:12.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:12.932 13:46:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:12.932 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:12.932 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:12.932 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:12.932 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:12.932 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:12.932 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:13.190 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:13.190 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:13.190 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:13.190 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:13.190 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:13.190 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:13.190 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:13.190 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:13.190 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
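The nvmf_veth_init steps above build the virtual topology the TCP tests run on: a dedicated network namespace for the target, three veth pairs, 10.0.0.x/24 addresses, and a bridge that joins the host-side peers. A condensed sketch of the same sequence, with the interface names and addresses exactly as printed in the trace:

  # Sketch of the veth/netns topology set up by nvmf_veth_init (names and IPs from the trace).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

The iptables ACCEPT rules and the three pings that follow in the trace then confirm 10.0.0.1, 10.0.0.2 and 10.0.0.3 are reachable before the target is started.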
00:28:13.190 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:13.190 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:13.190 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:13.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:13.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:28:13.190 00:28:13.190 --- 10.0.0.2 ping statistics --- 00:28:13.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.191 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:13.191 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:13.191 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:28:13.191 00:28:13.191 --- 10.0.0.3 ping statistics --- 00:28:13.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.191 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:13.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:13.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:28:13.191 00:28:13.191 --- 10.0.0.1 ping statistics --- 00:28:13.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:13.191 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=103681 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 103681 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 103681 ']' 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:13.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:13.191 13:46:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:13.191 [2024-05-15 13:46:26.225018] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:28:13.191 [2024-05-15 13:46:26.225159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.449 [2024-05-15 13:46:26.350687] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:13.449 [2024-05-15 13:46:26.367740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:13.449 [2024-05-15 13:46:26.463333] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.449 [2024-05-15 13:46:26.463636] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.449 [2024-05-15 13:46:26.463765] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.449 [2024-05-15 13:46:26.463820] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:13.449 [2024-05-15 13:46:26.463919] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
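nvmfappstart then launches nvmf_tgt inside that namespace and blocks until the RPC socket answers, which is what the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above corresponds to. A minimal sketch of the pattern, with paths and flags taken from the trace; the polling loop is an illustrative stand-in for the waitforlisten helper, not its actual implementation:

  # Start the target in the test namespace and wait for its RPC socket (illustrative poll loop).
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done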
00:28:13.449 [2024-05-15 13:46:26.464062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.449 [2024-05-15 13:46:26.464155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.449 [2024-05-15 13:46:26.464301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.449 [2024-05-15 13:46:26.464311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:14.383 Malloc0 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:14.383 Delay0 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:14.383 [2024-05-15 13:46:27.370853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:14.383 [2024-05-15 13:46:27.398806] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:14.383 [2024-05-15 13:46:27.399174] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:14.383 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:14.641 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:28:14.641 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:28:14.641 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:28:14.641 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:28:14.641 13:46:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:28:16.542 13:46:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:28:16.542 13:46:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:28:16.542 13:46:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:28:16.542 13:46:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:28:16.542 13:46:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:28:16.542 13:46:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:28:16.542 13:46:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=103763 00:28:16.542 13:46:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:28:16.542 13:46:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:28:16.542 [global] 00:28:16.542 thread=1 00:28:16.542 invalidate=1 00:28:16.542 rw=write 00:28:16.542 time_based=1 00:28:16.542 runtime=60 00:28:16.542 ioengine=libaio 00:28:16.542 direct=1 00:28:16.542 bs=4096 00:28:16.542 iodepth=1 00:28:16.542 norandommap=0 00:28:16.542 numjobs=1 00:28:16.542 00:28:16.542 verify_dump=1 00:28:16.542 verify_backlog=512 00:28:16.542 verify_state_save=0 00:28:16.542 do_verify=1 00:28:16.542 verify=crc32c-intel 00:28:16.542 [job0] 00:28:16.542 
filename=/dev/nvme0n1 00:28:16.542 Could not set queue depth (nvme0n1) 00:28:16.800 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:16.800 fio-3.35 00:28:16.800 Starting 1 thread 00:28:20.185 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:28:20.185 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.185 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:20.185 true 00:28:20.185 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.185 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:28:20.185 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.185 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:20.185 true 00:28:20.186 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.186 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:28:20.186 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.186 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:20.186 true 00:28:20.186 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.186 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:28:20.186 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.186 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:20.186 true 00:28:20.186 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.186 13:46:32 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:22.716 true 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:22.716 true 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 
-- # set +x 00:28:22.716 true 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:22.716 true 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:22.716 13:46:35 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 103763 00:29:18.926 00:29:18.926 job0: (groupid=0, jobs=1): err= 0: pid=103790: Wed May 15 13:47:29 2024 00:29:18.926 read: IOPS=775, BW=3103KiB/s (3178kB/s)(182MiB/60000msec) 00:29:18.926 slat (usec): min=13, max=11560, avg=17.45, stdev=64.24 00:29:18.926 clat (usec): min=163, max=40570k, avg=1080.23, stdev=188044.68 00:29:18.926 lat (usec): min=178, max=40570k, avg=1097.69, stdev=188044.72 00:29:18.926 clat percentiles (usec): 00:29:18.926 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 190], 00:29:18.926 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 210], 00:29:18.926 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 233], 95.00th=[ 253], 00:29:18.926 | 99.00th=[ 314], 99.50th=[ 334], 99.90th=[ 562], 99.95th=[ 676], 00:29:18.926 | 99.99th=[ 1762] 00:29:18.926 write: IOPS=776, BW=3106KiB/s (3181kB/s)(182MiB/60000msec); 0 zone resets 00:29:18.926 slat (usec): min=18, max=659, avg=24.56, stdev= 6.53 00:29:18.926 clat (usec): min=127, max=1788, avg=162.51, stdev=29.18 00:29:18.926 lat (usec): min=150, max=1812, avg=187.07, stdev=31.24 00:29:18.926 clat percentiles (usec): 00:29:18.926 | 1.00th=[ 139], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 147], 00:29:18.926 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:29:18.926 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 184], 95.00th=[ 202], 00:29:18.926 | 99.00th=[ 241], 99.50th=[ 265], 99.90th=[ 420], 99.95th=[ 562], 00:29:18.926 | 99.99th=[ 1254] 00:29:18.926 bw ( KiB/s): min= 4096, max=11568, per=100.00%, avg=9346.85, stdev=1626.53, samples=39 00:29:18.926 iops : min= 1024, max= 2892, avg=2336.69, stdev=406.64, samples=39 00:29:18.926 lat (usec) : 250=96.97%, 500=2.93%, 750=0.07%, 1000=0.02% 00:29:18.926 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:29:18.926 cpu : usr=0.58%, sys=2.42%, ctx=93151, majf=0, minf=2 00:29:18.926 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:18.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.926 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:18.926 issued rwts: total=46547,46592,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:18.926 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:18.926 00:29:18.926 Run status group 0 (all jobs): 00:29:18.926 READ: bw=3103KiB/s (3178kB/s), 3103KiB/s-3103KiB/s (3178kB/s-3178kB/s), io=182MiB (191MB), run=60000-60000msec 00:29:18.926 WRITE: bw=3106KiB/s (3181kB/s), 3106KiB/s-3106KiB/s (3181kB/s-3181kB/s), io=182MiB (191MB), run=60000-60000msec 00:29:18.926 00:29:18.926 Disk stats (read/write): 00:29:18.926 nvme0n1: ios=46367/46592, merge=0/0, ticks=9940/8133, in_queue=18073, util=99.65% 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout 
-- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:18.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:29:18.926 nvmf hotplug test: fio successful as expected 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:18.926 13:47:29 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:18.926 rmmod nvme_tcp 00:29:18.926 rmmod nvme_fabrics 00:29:18.926 rmmod nvme_keyring 00:29:18.926 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:18.926 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:29:18.926 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:29:18.926 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 103681 ']' 00:29:18.926 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 103681 00:29:18.926 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 103681 ']' 00:29:18.926 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 103681 00:29:18.926 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@951 -- # uname 00:29:18.926 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:18.926 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 103681 00:29:18.926 killing process with pid 103681 00:29:18.926 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:18.926 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:18.926 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 103681' 00:29:18.926 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 103681 00:29:18.926 [2024-05-15 13:47:30.044864] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:18.927 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 103681 00:29:18.927 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:18.927 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:18.927 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:18.927 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:18.927 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:18.927 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.927 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:18.927 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.927 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:18.927 00:29:18.927 real 1m4.672s 00:29:18.927 user 4m6.003s 00:29:18.927 sys 0m9.061s 00:29:18.927 ************************************ 00:29:18.927 END TEST nvmf_initiator_timeout 00:29:18.927 ************************************ 00:29:18.927 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:18.927 13:47:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:29:18.927 13:47:30 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ virt == phy ]] 00:29:18.927 13:47:30 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:29:18.927 13:47:30 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:18.927 13:47:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:18.927 13:47:30 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:29:18.927 13:47:30 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:18.927 13:47:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:18.927 13:47:30 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:29:18.927 13:47:30 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:18.927 13:47:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:18.927 13:47:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:18.927 13:47:30 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:29:18.927 ************************************ 00:29:18.927 START TEST nvmf_multicontroller 00:29:18.927 ************************************ 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:18.927 * Looking for test storage... 00:29:18.927 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.927 
13:47:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@23 -- # nvmftestinit 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:18.927 Cannot find device "nvmf_tgt_br" 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:18.927 Cannot find device "nvmf_tgt_br2" 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:29:18.927 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 
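The "Cannot find device" and "Cannot open network namespace" messages in this stretch of the trace are expected: nvmf_veth_init first tears down whatever topology a previous run may have left behind, and on a clean host there is simply nothing to delete. A minimal sketch of that idempotent cleanup, reusing the interface and namespace names from this trace (the error suppression is illustrative, not the exact common.sh code):

  ip link delete nvmf_init_if            2>/dev/null || true
  ip link delete nvmf_br type bridge     2>/dev/null || true
  ip netns delete nvmf_tgt_ns_spdk       2>/dev/null || true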
00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:18.928 Cannot find device "nvmf_tgt_br" 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:18.928 Cannot find device "nvmf_tgt_br2" 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:18.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:18.928 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
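At this point the trace has rebuilt the test topology: a network namespace nvmf_tgt_ns_spdk holding the two target-side veth ends, the initiator-side veth left on the host, and a bridge nvmf_br that the peer ends are attached to next. Condensed into one place, and assuming the same names and addresses as above (10.0.0.1 for the initiator, 10.0.0.2 and 10.0.0.3 for the target), the setup is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up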
00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:18.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:18.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:29:18.928 00:29:18.928 --- 10.0.0.2 ping statistics --- 00:29:18.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.928 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:18.928 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:18.928 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:29:18.928 00:29:18.928 --- 10.0.0.3 ping statistics --- 00:29:18.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.928 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:18.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:18.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:29:18.928 00:29:18.928 --- 10.0.0.1 ping statistics --- 00:29:18.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:18.928 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=104597 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 
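The bridge ports are now enslaved, an iptables rule accepts TCP/4420 on nvmf_init_if, and all three addresses answer ping, so nvmf_tgt is launched inside the namespace with core mask 0xE. The mask is a plain CPU bitmap: 0xE = 0b1110 selects cores 1, 2 and 3, matching the three reactor threads reported a few lines below. waitforlisten then blocks until the target's JSON-RPC socket responds; a rough equivalent of that wait, assuming the default /var/tmp/spdk.sock socket rather than the helper's exact implementation:

  # poll the target's RPC socket; give up if the process dies first
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done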
00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 104597 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 104597 ']' 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:18.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:18.928 13:47:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.928 [2024-05-15 13:47:30.961786] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:29:18.928 [2024-05-15 13:47:30.961878] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:18.928 [2024-05-15 13:47:31.086340] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:18.928 [2024-05-15 13:47:31.104518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:18.928 [2024-05-15 13:47:31.199055] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:18.928 [2024-05-15 13:47:31.199122] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:18.928 [2024-05-15 13:47:31.199134] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:18.928 [2024-05-15 13:47:31.199143] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:18.928 [2024-05-15 13:47:31.199151] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
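With the target up, the test configures it over JSON-RPC (the rpc_cmd lines that follow): a TCP transport, two malloc bdevs of 64 MB with 512-byte blocks, two subsystems (cnode1 and cnode2) each exposing one namespace, and listeners on 10.0.0.2 ports 4420 and 4421. Issued directly with rpc.py, the cnode1 half of that configuration would look roughly like this (flags copied from the trace; cnode2 follows the same pattern with Malloc1):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" bdev_malloc_create 64 512 -b Malloc0
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421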
00:29:18.928 [2024-05-15 13:47:31.199856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:18.928 [2024-05-15 13:47:31.199941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:18.928 [2024-05-15 13:47:31.199945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.928 13:47:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:18.928 13:47:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:29:18.928 13:47:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:18.928 13:47:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:18.928 13:47:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.928 13:47:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:18.928 13:47:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:18.928 13:47:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.928 13:47:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:18.928 [2024-05-15 13:47:32.001722] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.928 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:18.928 13:47:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:18.928 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:18.928 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.187 Malloc0 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.187 [2024-05-15 13:47:32.072735] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:19.187 [2024-05-15 
13:47:32.073006] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.187 [2024-05-15 13:47:32.080862] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.187 Malloc1 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=104648 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:19.187 13:47:32 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 104648 /var/tmp/bdevperf.sock 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 104648 ']' 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:19.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:19.187 13:47:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.122 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:20.122 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:29:20.122 13:47:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:20.122 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.122 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.382 NVMe0n1 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.382 1 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.382 2024/05/15 13:47:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:29:20.382 request: 00:29:20.382 { 00:29:20.382 "method": "bdev_nvme_attach_controller", 00:29:20.382 "params": { 00:29:20.382 "name": "NVMe0", 00:29:20.382 "trtype": "tcp", 00:29:20.382 "traddr": "10.0.0.2", 00:29:20.382 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:20.382 "hostaddr": "10.0.0.2", 00:29:20.382 "hostsvcid": "60000", 00:29:20.382 "adrfam": "ipv4", 00:29:20.382 "trsvcid": "4420", 00:29:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:29:20.382 } 00:29:20.382 } 00:29:20.382 Got JSON-RPC error response 00:29:20.382 GoRPCClient: error on JSON-RPC call 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # 
set +x 00:29:20.382 2024/05/15 13:47:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:29:20.382 request: 00:29:20.382 { 00:29:20.382 "method": "bdev_nvme_attach_controller", 00:29:20.382 "params": { 00:29:20.382 "name": "NVMe0", 00:29:20.382 "trtype": "tcp", 00:29:20.382 "traddr": "10.0.0.2", 00:29:20.382 "hostaddr": "10.0.0.2", 00:29:20.382 "hostsvcid": "60000", 00:29:20.382 "adrfam": "ipv4", 00:29:20.382 "trsvcid": "4420", 00:29:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:29:20.382 } 00:29:20.382 } 00:29:20.382 Got JSON-RPC error response 00:29:20.382 GoRPCClient: error on JSON-RPC call 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.382 2024/05/15 13:47:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:29:20.382 request: 00:29:20.382 { 00:29:20.382 "method": "bdev_nvme_attach_controller", 00:29:20.382 "params": { 00:29:20.382 "name": "NVMe0", 00:29:20.382 "trtype": "tcp", 
00:29:20.382 "traddr": "10.0.0.2", 00:29:20.382 "hostaddr": "10.0.0.2", 00:29:20.382 "hostsvcid": "60000", 00:29:20.382 "adrfam": "ipv4", 00:29:20.382 "trsvcid": "4420", 00:29:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.382 "multipath": "disable" 00:29:20.382 } 00:29:20.382 } 00:29:20.382 Got JSON-RPC error response 00:29:20.382 GoRPCClient: error on JSON-RPC call 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:20.382 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.383 2024/05/15 13:47:33 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:29:20.383 request: 00:29:20.383 { 00:29:20.383 "method": "bdev_nvme_attach_controller", 00:29:20.383 "params": { 00:29:20.383 "name": "NVMe0", 00:29:20.383 "trtype": "tcp", 00:29:20.383 "traddr": "10.0.0.2", 00:29:20.383 "hostaddr": "10.0.0.2", 00:29:20.383 "hostsvcid": "60000", 00:29:20.383 "adrfam": "ipv4", 00:29:20.383 "trsvcid": "4420", 00:29:20.383 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.383 "multipath": "failover" 00:29:20.383 } 00:29:20.383 } 00:29:20.383 Got JSON-RPC error response 00:29:20.383 GoRPCClient: error on JSON-RPC call 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller 
-- common/autotest_common.sh@651 -- # es=1 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.383 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.383 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.640 00:29:20.640 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.640 13:47:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:20.640 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.640 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:20.640 13:47:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:20.640 13:47:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.640 13:47:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:20.640 13:47:33 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:21.571 0 00:29:21.571 13:47:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:21.571 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.571 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 104648 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 104648 ']' 00:29:21.828 13:47:34 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 104648 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 104648 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:21.828 killing process with pid 104648 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 104648' 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 104648 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 104648 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:21.828 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:29:22.086 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:29:22.086 [2024-05-15 13:47:32.201995] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:29:22.086 [2024-05-15 13:47:32.202112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104648 ] 00:29:22.086 [2024-05-15 13:47:32.324540] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:22.086 [2024-05-15 13:47:32.342818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.086 [2024-05-15 13:47:32.440601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.086 [2024-05-15 13:47:33.512129] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 5b8a09ff-524c-432f-b233-5c667b959abd already exists 00:29:22.086 [2024-05-15 13:47:33.512200] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:5b8a09ff-524c-432f-b233-5c667b959abd alias for bdev NVMe1n1 00:29:22.086 [2024-05-15 13:47:33.512223] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:22.086 Running I/O for 1 seconds... 00:29:22.086 00:29:22.086 Latency(us) 00:29:22.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.086 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:22.086 NVMe0n1 : 1.01 19517.07 76.24 0.00 0.00 6547.43 3798.11 13345.51 00:29:22.086 =================================================================================================================== 00:29:22.086 Total : 19517.07 76.24 0.00 0.00 6547.43 3798.11 13345.51 00:29:22.086 Received shutdown signal, test time was about 1.000000 seconds 00:29:22.086 00:29:22.086 Latency(us) 00:29:22.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:22.086 =================================================================================================================== 00:29:22.086 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:22.086 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:22.086 13:47:34 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:22.086 rmmod nvme_tcp 00:29:22.086 rmmod nvme_fabrics 00:29:22.086 rmmod nvme_keyring 00:29:22.086 13:47:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:22.086 13:47:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:29:22.086 13:47:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:29:22.086 13:47:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 104597 ']' 00:29:22.086 13:47:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 104597 00:29:22.086 13:47:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 104597 ']' 00:29:22.086 13:47:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 104597 00:29:22.086 13:47:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:29:22.086 13:47:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:22.086 13:47:35 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 104597 00:29:22.086 13:47:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:29:22.086 13:47:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:29:22.086 killing process with pid 104597 00:29:22.086 13:47:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 104597' 00:29:22.087 13:47:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 104597 00:29:22.087 [2024-05-15 13:47:35.076344] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:22.087 13:47:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 104597 00:29:22.345 13:47:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:22.345 13:47:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:22.345 13:47:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:22.345 13:47:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:22.345 13:47:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:22.345 13:47:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.345 13:47:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:22.345 13:47:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.345 13:47:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:22.345 00:29:22.345 real 0m4.948s 00:29:22.345 user 0m15.564s 00:29:22.345 sys 0m1.080s 00:29:22.345 13:47:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:22.345 13:47:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:22.345 ************************************ 00:29:22.345 END TEST nvmf_multicontroller 00:29:22.345 ************************************ 00:29:22.345 13:47:35 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:22.345 13:47:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:22.345 13:47:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:22.345 13:47:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:22.345 ************************************ 00:29:22.345 START TEST nvmf_aer 00:29:22.345 ************************************ 00:29:22.345 13:47:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:22.603 * Looking for test storage... 
00:29:22.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.603 13:47:35 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:22.604 
13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:22.604 Cannot find device "nvmf_tgt_br" 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:22.604 Cannot find device "nvmf_tgt_br2" 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:22.604 Cannot find device "nvmf_tgt_br" 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:22.604 Cannot find device "nvmf_tgt_br2" 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:22.604 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:22.604 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:22.604 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
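Condensed, the nvmf_veth_init steps traced so far boil down to roughly the shell below — a sketch reusing the namespace and interface names already shown, run as root; the helper's initial teardown of any stale devices is the run of "Cannot find device" probes above, and the addressing, nvmf_br bridging and iptables ACCEPT rules follow next in the trace.

    ip netns add nvmf_tgt_ns_spdk                              # target-side network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-facing veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # first target veth pair
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2  # second target veth pair
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # move the target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # next in the trace: 10.0.0.1-3/24 addressing, bringing links up, enslaving the *_br ends to nvmf_br,
    # and iptables rules accepting TCP/4420 on nvmf_init_if and forwarding within nvmf_br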
00:29:22.862 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:22.862 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:22.862 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:22.862 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:22.862 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:22.862 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:22.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:22.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:29:22.863 00:29:22.863 --- 10.0.0.2 ping statistics --- 00:29:22.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.863 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:22.863 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:22.863 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:29:22.863 00:29:22.863 --- 10.0.0.3 ping statistics --- 00:29:22.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.863 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:22.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:22.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:29:22.863 00:29:22.863 --- 10.0.0.1 ping statistics --- 00:29:22.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:22.863 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=104887 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 104887 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 104887 ']' 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:22.863 13:47:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:22.863 [2024-05-15 13:47:35.904962] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:29:22.863 [2024-05-15 13:47:35.905059] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.121 [2024-05-15 13:47:36.029508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:23.121 [2024-05-15 13:47:36.044851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:23.121 [2024-05-15 13:47:36.150144] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.121 [2024-05-15 13:47:36.150221] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:23.121 [2024-05-15 13:47:36.150235] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.121 [2024-05-15 13:47:36.150244] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.121 [2024-05-15 13:47:36.150252] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.121 [2024-05-15 13:47:36.150654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.121 [2024-05-15 13:47:36.150805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.121 [2024-05-15 13:47:36.150861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:23.121 [2024-05-15 13:47:36.150869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.054 [2024-05-15 13:47:36.961208] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.054 Malloc0 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.054 13:47:36 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.054 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.054 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:24.054 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.054 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.054 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.054 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:24.054 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.055 [2024-05-15 13:47:37.022822] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:24.055 [2024-05-15 13:47:37.023221] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.055 [ 00:29:24.055 { 00:29:24.055 "allow_any_host": true, 00:29:24.055 "hosts": [], 00:29:24.055 "listen_addresses": [], 00:29:24.055 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:24.055 "subtype": "Discovery" 00:29:24.055 }, 00:29:24.055 { 00:29:24.055 "allow_any_host": true, 00:29:24.055 "hosts": [], 00:29:24.055 "listen_addresses": [ 00:29:24.055 { 00:29:24.055 "adrfam": "IPv4", 00:29:24.055 "traddr": "10.0.0.2", 00:29:24.055 "trsvcid": "4420", 00:29:24.055 "trtype": "TCP" 00:29:24.055 } 00:29:24.055 ], 00:29:24.055 "max_cntlid": 65519, 00:29:24.055 "max_namespaces": 2, 00:29:24.055 "min_cntlid": 1, 00:29:24.055 "model_number": "SPDK bdev Controller", 00:29:24.055 "namespaces": [ 00:29:24.055 { 00:29:24.055 "bdev_name": "Malloc0", 00:29:24.055 "name": "Malloc0", 00:29:24.055 "nguid": "E68451EF7BE54DE6ABA8C7D92C954229", 00:29:24.055 "nsid": 1, 00:29:24.055 "uuid": "e68451ef-7be5-4de6-aba8-c7d92c954229" 00:29:24.055 } 00:29:24.055 ], 00:29:24.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.055 "serial_number": "SPDK00000000000001", 00:29:24.055 "subtype": "NVMe" 00:29:24.055 } 00:29:24.055 ] 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=104946 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:29:24.055 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.313 Malloc1 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.313 Asynchronous Event Request test 00:29:24.313 Attaching to 10.0.0.2 00:29:24.313 Attached to 10.0.0.2 00:29:24.313 Registering asynchronous event callbacks... 00:29:24.313 Starting namespace attribute notice tests for all controllers... 00:29:24.313 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:24.313 aer_cb - Changed Namespace 00:29:24.313 Cleaning up... 00:29:24.313 [ 00:29:24.313 { 00:29:24.313 "allow_any_host": true, 00:29:24.313 "hosts": [], 00:29:24.313 "listen_addresses": [], 00:29:24.313 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:24.313 "subtype": "Discovery" 00:29:24.313 }, 00:29:24.313 { 00:29:24.313 "allow_any_host": true, 00:29:24.313 "hosts": [], 00:29:24.313 "listen_addresses": [ 00:29:24.313 { 00:29:24.313 "adrfam": "IPv4", 00:29:24.313 "traddr": "10.0.0.2", 00:29:24.313 "trsvcid": "4420", 00:29:24.313 "trtype": "TCP" 00:29:24.313 } 00:29:24.313 ], 00:29:24.313 "max_cntlid": 65519, 00:29:24.313 "max_namespaces": 2, 00:29:24.313 "min_cntlid": 1, 00:29:24.313 "model_number": "SPDK bdev Controller", 00:29:24.313 "namespaces": [ 00:29:24.313 { 00:29:24.313 "bdev_name": "Malloc0", 00:29:24.313 "name": "Malloc0", 00:29:24.313 "nguid": "E68451EF7BE54DE6ABA8C7D92C954229", 00:29:24.313 "nsid": 1, 00:29:24.313 "uuid": "e68451ef-7be5-4de6-aba8-c7d92c954229" 00:29:24.313 }, 00:29:24.313 { 00:29:24.313 "bdev_name": "Malloc1", 00:29:24.313 "name": "Malloc1", 00:29:24.313 "nguid": "56D305DE465A4939BEB1BDDA1FF8EF90", 00:29:24.313 "nsid": 2, 00:29:24.313 "uuid": "56d305de-465a-4939-beb1-bdda1ff8ef90" 00:29:24.313 } 00:29:24.313 ], 00:29:24.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:24.313 "serial_number": "SPDK00000000000001", 00:29:24.313 "subtype": "NVMe" 00:29:24.313 } 00:29:24.313 ] 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 104946 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:24.313 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:24.573 rmmod nvme_tcp 00:29:24.573 rmmod nvme_fabrics 00:29:24.573 rmmod nvme_keyring 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 104887 ']' 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 104887 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 104887 ']' 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 104887 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 104887 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:24.573 killing process with pid 104887 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 104887' 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 104887 00:29:24.573 [2024-05-15 13:47:37.541821] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:24.573 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 104887 00:29:24.832 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:24.832 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:24.832 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:24.832 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:24.832 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:29:24.832 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.832 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:24.832 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.832 13:47:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:24.832 00:29:24.832 real 0m2.388s 00:29:24.832 user 0m6.659s 00:29:24.832 sys 0m0.668s 00:29:24.832 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:24.832 13:47:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:24.832 ************************************ 00:29:24.832 END TEST nvmf_aer 00:29:24.832 ************************************ 00:29:24.832 13:47:37 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:24.832 13:47:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:24.832 13:47:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:24.832 13:47:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:24.832 ************************************ 00:29:24.832 START TEST nvmf_async_init 00:29:24.832 ************************************ 00:29:24.832 13:47:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:25.091 * Looking for test storage... 00:29:25.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.091 
13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.091 13:47:37 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:25.091 13:47:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7ee657be2e154eb1b251aced0493862b 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:25.092 13:47:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:25.092 Cannot find device "nvmf_tgt_br" 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:25.092 Cannot find device "nvmf_tgt_br2" 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:25.092 Cannot find device "nvmf_tgt_br" 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:25.092 Cannot find device "nvmf_tgt_br2" 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:25.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:25.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:25.092 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:25.350 
13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:25.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:29:25.350 00:29:25.350 --- 10.0.0.2 ping statistics --- 00:29:25.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.350 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:25.350 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:25.350 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:29:25.350 00:29:25.350 --- 10.0.0.3 ping statistics --- 00:29:25.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.350 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:25.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:25.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:29:25.350 00:29:25.350 --- 10.0.0.1 ping statistics --- 00:29:25.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.350 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:25.350 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=105116 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 105116 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 105116 ']' 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:25.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:25.351 13:47:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:25.351 [2024-05-15 13:47:38.394044] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:29:25.351 [2024-05-15 13:47:38.394158] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.608 [2024-05-15 13:47:38.519385] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:25.608 [2024-05-15 13:47:38.536196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.608 [2024-05-15 13:47:38.640585] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:25.608 [2024-05-15 13:47:38.640651] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.608 [2024-05-15 13:47:38.640666] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.608 [2024-05-15 13:47:38.640677] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.608 [2024-05-15 13:47:38.640686] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.608 [2024-05-15 13:47:38.640716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.541 [2024-05-15 13:47:39.488317] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.541 null0 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.541 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7ee657be2e154eb1b251aced0493862b 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.542 13:47:39 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.542 [2024-05-15 13:47:39.528276] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:26.542 [2024-05-15 13:47:39.528481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.542 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.829 nvme0n1 00:29:26.829 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.829 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:26.829 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.829 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.829 [ 00:29:26.829 { 00:29:26.829 "aliases": [ 00:29:26.829 "7ee657be-2e15-4eb1-b251-aced0493862b" 00:29:26.829 ], 00:29:26.829 "assigned_rate_limits": { 00:29:26.829 "r_mbytes_per_sec": 0, 00:29:26.829 "rw_ios_per_sec": 0, 00:29:26.829 "rw_mbytes_per_sec": 0, 00:29:26.829 "w_mbytes_per_sec": 0 00:29:26.829 }, 00:29:26.829 "block_size": 512, 00:29:26.829 "claimed": false, 00:29:26.829 "driver_specific": { 00:29:26.829 "mp_policy": "active_passive", 00:29:26.829 "nvme": [ 00:29:26.829 { 00:29:26.829 "ctrlr_data": { 00:29:26.829 "ana_reporting": false, 00:29:26.829 "cntlid": 1, 00:29:26.829 "firmware_revision": "24.05", 00:29:26.829 "model_number": "SPDK bdev Controller", 00:29:26.829 "multi_ctrlr": true, 00:29:26.829 "oacs": { 00:29:26.829 "firmware": 0, 00:29:26.829 "format": 0, 00:29:26.829 "ns_manage": 0, 00:29:26.829 "security": 0 00:29:26.829 }, 00:29:26.829 "serial_number": "00000000000000000000", 00:29:26.829 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.829 "vendor_id": "0x8086" 00:29:26.829 }, 00:29:26.829 "ns_data": { 00:29:26.829 "can_share": true, 00:29:26.829 "id": 1 00:29:26.829 }, 00:29:26.829 "trid": { 00:29:26.829 "adrfam": "IPv4", 00:29:26.829 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:26.829 "traddr": "10.0.0.2", 00:29:26.829 "trsvcid": "4420", 00:29:26.829 "trtype": "TCP" 00:29:26.829 }, 00:29:26.829 "vs": { 00:29:26.829 "nvme_version": "1.3" 00:29:26.829 } 00:29:26.829 } 00:29:26.829 ] 00:29:26.829 }, 00:29:26.829 "memory_domains": [ 00:29:26.829 { 00:29:26.829 "dma_device_id": "system", 00:29:26.829 "dma_device_type": 1 00:29:26.829 } 00:29:26.829 ], 00:29:26.829 "name": "nvme0n1", 00:29:26.829 "num_blocks": 2097152, 00:29:26.829 "product_name": "NVMe disk", 00:29:26.829 "supported_io_types": { 00:29:26.829 "abort": true, 00:29:26.829 "compare": true, 00:29:26.829 "compare_and_write": true, 00:29:26.829 "flush": true, 00:29:26.829 "nvme_admin": true, 00:29:26.829 "nvme_io": true, 
00:29:26.829 "read": true, 00:29:26.829 "reset": true, 00:29:26.829 "unmap": false, 00:29:26.829 "write": true, 00:29:26.829 "write_zeroes": true 00:29:26.829 }, 00:29:26.829 "uuid": "7ee657be-2e15-4eb1-b251-aced0493862b", 00:29:26.829 "zoned": false 00:29:26.829 } 00:29:26.829 ] 00:29:26.829 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.829 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:26.829 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.829 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:26.829 [2024-05-15 13:47:39.792861] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:26.829 [2024-05-15 13:47:39.792979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1900920 (9): Bad file descriptor 00:29:27.092 [2024-05-15 13:47:39.924808] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:27.092 [ 00:29:27.092 { 00:29:27.092 "aliases": [ 00:29:27.092 "7ee657be-2e15-4eb1-b251-aced0493862b" 00:29:27.092 ], 00:29:27.092 "assigned_rate_limits": { 00:29:27.092 "r_mbytes_per_sec": 0, 00:29:27.092 "rw_ios_per_sec": 0, 00:29:27.092 "rw_mbytes_per_sec": 0, 00:29:27.092 "w_mbytes_per_sec": 0 00:29:27.092 }, 00:29:27.092 "block_size": 512, 00:29:27.092 "claimed": false, 00:29:27.092 "driver_specific": { 00:29:27.092 "mp_policy": "active_passive", 00:29:27.092 "nvme": [ 00:29:27.092 { 00:29:27.092 "ctrlr_data": { 00:29:27.092 "ana_reporting": false, 00:29:27.092 "cntlid": 2, 00:29:27.092 "firmware_revision": "24.05", 00:29:27.092 "model_number": "SPDK bdev Controller", 00:29:27.092 "multi_ctrlr": true, 00:29:27.092 "oacs": { 00:29:27.092 "firmware": 0, 00:29:27.092 "format": 0, 00:29:27.092 "ns_manage": 0, 00:29:27.092 "security": 0 00:29:27.092 }, 00:29:27.092 "serial_number": "00000000000000000000", 00:29:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:27.092 "vendor_id": "0x8086" 00:29:27.092 }, 00:29:27.092 "ns_data": { 00:29:27.092 "can_share": true, 00:29:27.092 "id": 1 00:29:27.092 }, 00:29:27.092 "trid": { 00:29:27.092 "adrfam": "IPv4", 00:29:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:27.092 "traddr": "10.0.0.2", 00:29:27.092 "trsvcid": "4420", 00:29:27.092 "trtype": "TCP" 00:29:27.092 }, 00:29:27.092 "vs": { 00:29:27.092 "nvme_version": "1.3" 00:29:27.092 } 00:29:27.092 } 00:29:27.092 ] 00:29:27.092 }, 00:29:27.092 "memory_domains": [ 00:29:27.092 { 00:29:27.092 "dma_device_id": "system", 00:29:27.092 "dma_device_type": 1 00:29:27.092 } 00:29:27.092 ], 00:29:27.092 "name": "nvme0n1", 00:29:27.092 "num_blocks": 2097152, 00:29:27.092 "product_name": "NVMe disk", 00:29:27.092 "supported_io_types": { 00:29:27.092 "abort": true, 00:29:27.092 "compare": true, 00:29:27.092 "compare_and_write": true, 00:29:27.092 "flush": true, 00:29:27.092 "nvme_admin": true, 00:29:27.092 "nvme_io": true, 00:29:27.092 "read": true, 00:29:27.092 "reset": true, 00:29:27.092 "unmap": 
false, 00:29:27.092 "write": true, 00:29:27.092 "write_zeroes": true 00:29:27.092 }, 00:29:27.092 "uuid": "7ee657be-2e15-4eb1-b251-aced0493862b", 00:29:27.092 "zoned": false 00:29:27.092 } 00:29:27.092 ] 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.jpvr8kIhH3 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.jpvr8kIhH3 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:27.092 [2024-05-15 13:47:39.993093] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:27.092 [2024-05-15 13:47:39.993294] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jpvr8kIhH3 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.092 13:47:39 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:27.092 [2024-05-15 13:47:40.001057] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:27.092 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.092 13:47:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jpvr8kIhH3 00:29:27.092 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.092 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:27.092 [2024-05-15 13:47:40.013051] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:29:27.092 [2024-05-15 13:47:40.013111] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:29:27.092 nvme0n1 00:29:27.092 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.092 13:47:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:27.092 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.092 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:27.092 [ 00:29:27.092 { 00:29:27.092 "aliases": [ 00:29:27.092 "7ee657be-2e15-4eb1-b251-aced0493862b" 00:29:27.092 ], 00:29:27.092 "assigned_rate_limits": { 00:29:27.092 "r_mbytes_per_sec": 0, 00:29:27.092 "rw_ios_per_sec": 0, 00:29:27.092 "rw_mbytes_per_sec": 0, 00:29:27.092 "w_mbytes_per_sec": 0 00:29:27.092 }, 00:29:27.092 "block_size": 512, 00:29:27.092 "claimed": false, 00:29:27.092 "driver_specific": { 00:29:27.092 "mp_policy": "active_passive", 00:29:27.092 "nvme": [ 00:29:27.092 { 00:29:27.092 "ctrlr_data": { 00:29:27.092 "ana_reporting": false, 00:29:27.092 "cntlid": 3, 00:29:27.092 "firmware_revision": "24.05", 00:29:27.092 "model_number": "SPDK bdev Controller", 00:29:27.092 "multi_ctrlr": true, 00:29:27.092 "oacs": { 00:29:27.092 "firmware": 0, 00:29:27.092 "format": 0, 00:29:27.092 "ns_manage": 0, 00:29:27.092 "security": 0 00:29:27.092 }, 00:29:27.092 "serial_number": "00000000000000000000", 00:29:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:27.092 "vendor_id": "0x8086" 00:29:27.092 }, 00:29:27.092 "ns_data": { 00:29:27.092 "can_share": true, 00:29:27.092 "id": 1 00:29:27.092 }, 00:29:27.092 "trid": { 00:29:27.092 "adrfam": "IPv4", 00:29:27.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:27.092 "traddr": "10.0.0.2", 00:29:27.092 "trsvcid": "4421", 00:29:27.092 "trtype": "TCP" 00:29:27.093 }, 00:29:27.093 "vs": { 00:29:27.093 "nvme_version": "1.3" 00:29:27.093 } 00:29:27.093 } 00:29:27.093 ] 00:29:27.093 }, 00:29:27.093 "memory_domains": [ 00:29:27.093 { 00:29:27.093 "dma_device_id": "system", 00:29:27.093 "dma_device_type": 1 00:29:27.093 } 00:29:27.093 ], 00:29:27.093 "name": "nvme0n1", 00:29:27.093 "num_blocks": 2097152, 00:29:27.093 "product_name": "NVMe disk", 00:29:27.093 "supported_io_types": { 00:29:27.093 "abort": true, 00:29:27.093 "compare": true, 00:29:27.093 "compare_and_write": true, 00:29:27.093 "flush": true, 00:29:27.093 "nvme_admin": true, 00:29:27.093 "nvme_io": true, 00:29:27.093 "read": true, 00:29:27.093 "reset": true, 00:29:27.093 "unmap": false, 00:29:27.093 "write": true, 00:29:27.093 "write_zeroes": true 00:29:27.093 }, 00:29:27.093 "uuid": "7ee657be-2e15-4eb1-b251-aced0493862b", 00:29:27.093 "zoned": false 00:29:27.093 } 00:29:27.093 ] 00:29:27.093 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.093 13:47:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:27.093 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.093 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:27.093 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.093 13:47:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.jpvr8kIhH3 00:29:27.093 13:47:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 
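(Editor's note: the TLS leg of the async_init test above reduces to the RPC sequence below. This is a condensed sketch assembled from the rpc_cmd xtrace lines in this log; rpc_cmd is the test-suite wrapper around scripts/rpc.py, the PSK value and /tmp path are the ones echoed by this particular run, and the redirect of the key into the temp file is inferred from the chmod that follows rather than shown in the trace.)

    # Sketch of the secure-channel attach exercised by host/async_init.sh (lines 53-75 above).
    # Assumes a running nvmf_tgt with nqn.2016-06.io.spdk:cnode0 already created.
    key_path=$(mktemp)                                  # /tmp/tmp.jpvr8kIhH3 in this run
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    # Restrict the subsystem to named hosts, expose a TLS listener, and register the host with its PSK.
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    # Attach the initiator-side bdev over the TLS listener, then detach and remove the key when done.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    rpc_cmd bdev_nvme_detach_controller nvme0
    rm -f "$key_path"

As the WARNING lines note, both the --psk path on the target side and spdk_nvme_ctrlr_opts.psk on the initiator side are traced here as deprecated features scheduled for removal in v24.09.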
00:29:27.093 13:47:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:29:27.093 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:27.093 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:27.351 rmmod nvme_tcp 00:29:27.351 rmmod nvme_fabrics 00:29:27.351 rmmod nvme_keyring 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 105116 ']' 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 105116 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 105116 ']' 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 105116 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 105116 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:27.351 killing process with pid 105116 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 105116' 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 105116 00:29:27.351 [2024-05-15 13:47:40.278293] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:29:27.351 [2024-05-15 13:47:40.278347] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:27.351 [2024-05-15 13:47:40.278360] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:27.351 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 105116 00:29:27.609 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:27.609 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:27.609 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:27.609 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:27.609 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:27.609 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.609 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
14> /dev/null' 00:29:27.609 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.609 13:47:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:27.609 00:29:27.609 real 0m2.656s 00:29:27.609 user 0m2.521s 00:29:27.609 sys 0m0.672s 00:29:27.609 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:27.609 13:47:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:27.609 ************************************ 00:29:27.609 END TEST nvmf_async_init 00:29:27.609 ************************************ 00:29:27.609 13:47:40 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:27.609 13:47:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:27.609 13:47:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:27.609 13:47:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:27.609 ************************************ 00:29:27.609 START TEST dma 00:29:27.609 ************************************ 00:29:27.609 13:47:40 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:27.609 * Looking for test storage... 00:29:27.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:27.609 13:47:40 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.609 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:27.609 13:47:40 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.609 13:47:40 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.609 13:47:40 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.609 13:47:40 nvmf_tcp.dma -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.610 13:47:40 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.610 13:47:40 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.610 13:47:40 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:29:27.610 13:47:40 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.610 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:29:27.610 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:27.610 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:27.610 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.610 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.610 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.610 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:27.610 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:27.610 13:47:40 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:27.610 13:47:40 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:27.610 13:47:40 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:29:27.610 00:29:27.610 real 0m0.096s 00:29:27.610 user 0m0.045s 00:29:27.610 sys 0m0.058s 00:29:27.610 13:47:40 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:27.610 13:47:40 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:29:27.610 
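(Editor's note: the dma host test above is effectively a no-op on this run. After sourcing nvmf/common.sh it checks the transport and exits, roughly as sketched below; the xtrace shows only the expanded comparison, so the surrounding if-form and the idea that "tcp" comes from a transport variable are inferred, not copied from the script.)

    # Guard traced at host/dma.sh lines 12-13 above: the dma test only applies to RDMA,
    # so a tcp run bails out immediately with success.
    if [ "tcp" != "rdma" ]; then
        exit 0
    fi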
************************************ 00:29:27.610 END TEST dma 00:29:27.610 ************************************ 00:29:27.610 13:47:40 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:27.610 13:47:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:27.610 13:47:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:27.610 13:47:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:27.868 ************************************ 00:29:27.868 START TEST nvmf_identify 00:29:27.868 ************************************ 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:27.868 * Looking for test storage... 00:29:27.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:27.868 Cannot find device "nvmf_tgt_br" 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:27.868 Cannot find device "nvmf_tgt_br2" 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:27.868 Cannot find device "nvmf_tgt_br" 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 
down 00:29:27.868 Cannot find device "nvmf_tgt_br2" 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:29:27.868 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:27.869 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:27.869 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:27.869 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:27.869 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:29:27.869 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:28.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:28.127 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:29:28.127 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:28.127 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:28.127 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:28.127 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:28.127 13:47:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:28.127 13:47:41 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:28.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:28.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:29:28.127 00:29:28.127 --- 10.0.0.2 ping statistics --- 00:29:28.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.127 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:28.127 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:28.127 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:29:28.127 00:29:28.127 --- 10.0.0.3 ping statistics --- 00:29:28.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.127 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:28.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:28.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:29:28.127 00:29:28.127 --- 10.0.0.1 ping statistics --- 00:29:28.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.127 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=105389 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 105389 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 105389 ']' 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:28.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:28.127 13:47:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:28.385 [2024-05-15 13:47:41.264282] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:29:28.385 [2024-05-15 13:47:41.264369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.385 [2024-05-15 13:47:41.391122] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:28.385 [2024-05-15 13:47:41.407467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:28.644 [2024-05-15 13:47:41.508246] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.644 [2024-05-15 13:47:41.508318] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.644 [2024-05-15 13:47:41.508330] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.644 [2024-05-15 13:47:41.508338] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.644 [2024-05-15 13:47:41.508346] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:28.644 [2024-05-15 13:47:41.508475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.644 [2024-05-15 13:47:41.508747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.644 [2024-05-15 13:47:41.509503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.644 [2024-05-15 13:47:41.509510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.225 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:29.225 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:29:29.225 13:47:42 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:29.225 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.225 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:29.225 [2024-05-15 13:47:42.263969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:29.225 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.225 13:47:42 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:29.225 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:29.225 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:29.484 Malloc0 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:29.484 [2024-05-15 13:47:42.381706] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:29.484 [2024-05-15 13:47:42.381980] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:29.484 [ 00:29:29.484 { 00:29:29.484 "allow_any_host": true, 00:29:29.484 "hosts": [], 00:29:29.484 "listen_addresses": [ 00:29:29.484 { 00:29:29.484 "adrfam": "IPv4", 00:29:29.484 "traddr": "10.0.0.2", 00:29:29.484 "trsvcid": "4420", 00:29:29.484 "trtype": "TCP" 00:29:29.484 } 00:29:29.484 ], 00:29:29.484 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:29.484 "subtype": "Discovery" 00:29:29.484 }, 00:29:29.484 { 00:29:29.484 "allow_any_host": true, 00:29:29.484 "hosts": [], 00:29:29.484 "listen_addresses": [ 00:29:29.484 { 00:29:29.484 "adrfam": "IPv4", 00:29:29.484 "traddr": "10.0.0.2", 00:29:29.484 "trsvcid": "4420", 00:29:29.484 "trtype": "TCP" 00:29:29.484 } 00:29:29.484 ], 00:29:29.484 "max_cntlid": 65519, 00:29:29.484 "max_namespaces": 32, 00:29:29.484 "min_cntlid": 1, 00:29:29.484 "model_number": "SPDK bdev Controller", 00:29:29.484 "namespaces": [ 00:29:29.484 { 00:29:29.484 "bdev_name": "Malloc0", 00:29:29.484 "eui64": "ABCDEF0123456789", 00:29:29.484 "name": "Malloc0", 00:29:29.484 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:29.484 "nsid": 1, 00:29:29.484 "uuid": "d87a2f46-a27d-4e3a-89fc-8d4bfd184c9e" 00:29:29.484 } 00:29:29.484 
], 00:29:29.484 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:29.484 "serial_number": "SPDK00000000000001", 00:29:29.484 "subtype": "NVMe" 00:29:29.484 } 00:29:29.484 ] 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:29.484 13:47:42 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:29.484 [2024-05-15 13:47:42.436766] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:29:29.484 [2024-05-15 13:47:42.436817] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105442 ] 00:29:29.484 [2024-05-15 13:47:42.556397] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:29.484 [2024-05-15 13:47:42.574952] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:29:29.484 [2024-05-15 13:47:42.575040] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:29.484 [2024-05-15 13:47:42.575048] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:29.484 [2024-05-15 13:47:42.575065] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:29.484 [2024-05-15 13:47:42.575076] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:29.484 [2024-05-15 13:47:42.575247] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:29:29.484 [2024-05-15 13:47:42.575298] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x118a590 0 00:29:29.747 [2024-05-15 13:47:42.587627] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:29.747 [2024-05-15 13:47:42.587662] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:29.747 [2024-05-15 13:47:42.587676] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:29.747 [2024-05-15 13:47:42.587682] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:29.747 [2024-05-15 13:47:42.587735] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.587743] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.587748] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x118a590) 00:29:29.747 [2024-05-15 13:47:42.587784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:29.747 [2024-05-15 13:47:42.587829] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1330, cid 0, qid 0 00:29:29.747 [2024-05-15 13:47:42.595622] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.747 [2024-05-15 13:47:42.595648] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.747 [2024-05-15 13:47:42.595654] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.595659] 
nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1330) on tqpair=0x118a590 00:29:29.747 [2024-05-15 13:47:42.595676] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:29.747 [2024-05-15 13:47:42.595687] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:29:29.747 [2024-05-15 13:47:42.595694] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:29:29.747 [2024-05-15 13:47:42.595712] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.595718] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.595722] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x118a590) 00:29:29.747 [2024-05-15 13:47:42.595733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.747 [2024-05-15 13:47:42.595765] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1330, cid 0, qid 0 00:29:29.747 [2024-05-15 13:47:42.595852] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.747 [2024-05-15 13:47:42.595859] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.747 [2024-05-15 13:47:42.595863] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.595868] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1330) on tqpair=0x118a590 00:29:29.747 [2024-05-15 13:47:42.595875] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:29:29.747 [2024-05-15 13:47:42.595883] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:29:29.747 [2024-05-15 13:47:42.595891] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.595896] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.595900] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x118a590) 00:29:29.747 [2024-05-15 13:47:42.595908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.747 [2024-05-15 13:47:42.595928] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1330, cid 0, qid 0 00:29:29.747 [2024-05-15 13:47:42.595986] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.747 [2024-05-15 13:47:42.595994] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.747 [2024-05-15 13:47:42.595997] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.596002] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1330) on tqpair=0x118a590 00:29:29.747 [2024-05-15 13:47:42.596009] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:29:29.747 [2024-05-15 13:47:42.596018] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:29:29.747 [2024-05-15 
13:47:42.596026] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.596030] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.596034] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x118a590) 00:29:29.747 [2024-05-15 13:47:42.596042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.747 [2024-05-15 13:47:42.596060] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1330, cid 0, qid 0 00:29:29.747 [2024-05-15 13:47:42.596117] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.747 [2024-05-15 13:47:42.596123] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.747 [2024-05-15 13:47:42.596127] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.596132] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1330) on tqpair=0x118a590 00:29:29.747 [2024-05-15 13:47:42.596139] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:29.747 [2024-05-15 13:47:42.596150] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.596154] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.596158] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x118a590) 00:29:29.747 [2024-05-15 13:47:42.596166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.747 [2024-05-15 13:47:42.596183] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1330, cid 0, qid 0 00:29:29.747 [2024-05-15 13:47:42.596248] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.747 [2024-05-15 13:47:42.596256] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.747 [2024-05-15 13:47:42.596260] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.596265] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1330) on tqpair=0x118a590 00:29:29.747 [2024-05-15 13:47:42.596271] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:29:29.747 [2024-05-15 13:47:42.596276] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:29:29.747 [2024-05-15 13:47:42.596285] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:29.747 [2024-05-15 13:47:42.596391] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:29:29.747 [2024-05-15 13:47:42.596396] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:29.747 [2024-05-15 13:47:42.596407] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.747 [2024-05-15 13:47:42.596412] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:29:29.747 [2024-05-15 13:47:42.596416] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x118a590) 00:29:29.748 [2024-05-15 13:47:42.596424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.748 [2024-05-15 13:47:42.596445] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1330, cid 0, qid 0 00:29:29.748 [2024-05-15 13:47:42.596507] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.748 [2024-05-15 13:47:42.596515] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.748 [2024-05-15 13:47:42.596518] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.596523] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1330) on tqpair=0x118a590 00:29:29.748 [2024-05-15 13:47:42.596530] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:29.748 [2024-05-15 13:47:42.596541] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.596546] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.596550] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x118a590) 00:29:29.748 [2024-05-15 13:47:42.596558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.748 [2024-05-15 13:47:42.596575] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1330, cid 0, qid 0 00:29:29.748 [2024-05-15 13:47:42.596644] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.748 [2024-05-15 13:47:42.596653] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.748 [2024-05-15 13:47:42.596657] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.596661] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1330) on tqpair=0x118a590 00:29:29.748 [2024-05-15 13:47:42.596667] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:29.748 [2024-05-15 13:47:42.596673] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:29:29.748 [2024-05-15 13:47:42.596682] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:29:29.748 [2024-05-15 13:47:42.596700] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:29:29.748 [2024-05-15 13:47:42.596711] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.596716] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x118a590) 00:29:29.748 [2024-05-15 13:47:42.596724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.748 [2024-05-15 13:47:42.596746] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x11d1330, cid 0, qid 0 00:29:29.748 [2024-05-15 13:47:42.596861] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:29.748 [2024-05-15 13:47:42.596870] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:29.748 [2024-05-15 13:47:42.596874] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.596879] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x118a590): datao=0, datal=4096, cccid=0 00:29:29.748 [2024-05-15 13:47:42.596884] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11d1330) on tqpair(0x118a590): expected_datao=0, payload_size=4096 00:29:29.748 [2024-05-15 13:47:42.596890] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.596899] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.596904] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.596914] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.748 [2024-05-15 13:47:42.596920] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.748 [2024-05-15 13:47:42.596924] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.596928] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1330) on tqpair=0x118a590 00:29:29.748 [2024-05-15 13:47:42.596939] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:29:29.748 [2024-05-15 13:47:42.596945] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:29:29.748 [2024-05-15 13:47:42.596950] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:29:29.748 [2024-05-15 13:47:42.596956] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:29:29.748 [2024-05-15 13:47:42.596961] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:29:29.748 [2024-05-15 13:47:42.596966] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:29:29.748 [2024-05-15 13:47:42.596980] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:29:29.748 [2024-05-15 13:47:42.596992] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.596997] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597001] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x118a590) 00:29:29.748 [2024-05-15 13:47:42.597009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:29.748 [2024-05-15 13:47:42.597030] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1330, cid 0, qid 0 00:29:29.748 [2024-05-15 13:47:42.597099] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.748 [2024-05-15 13:47:42.597106] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.748 [2024-05-15 
13:47:42.597110] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597114] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1330) on tqpair=0x118a590 00:29:29.748 [2024-05-15 13:47:42.597124] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597129] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597133] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x118a590) 00:29:29.748 [2024-05-15 13:47:42.597140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.748 [2024-05-15 13:47:42.597147] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597151] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597155] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x118a590) 00:29:29.748 [2024-05-15 13:47:42.597161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.748 [2024-05-15 13:47:42.597169] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597173] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597176] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x118a590) 00:29:29.748 [2024-05-15 13:47:42.597183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.748 [2024-05-15 13:47:42.597189] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597194] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597198] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.748 [2024-05-15 13:47:42.597205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.748 [2024-05-15 13:47:42.597210] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:29:29.748 [2024-05-15 13:47:42.597224] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:29.748 [2024-05-15 13:47:42.597232] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597236] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x118a590) 00:29:29.748 [2024-05-15 13:47:42.597243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.748 [2024-05-15 13:47:42.597264] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1330, cid 0, qid 0 00:29:29.748 [2024-05-15 13:47:42.597272] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1490, cid 1, qid 0 00:29:29.748 [2024-05-15 13:47:42.597277] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d15f0, cid 2, qid 0 
00:29:29.748 [2024-05-15 13:47:42.597282] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.748 [2024-05-15 13:47:42.597287] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d18b0, cid 4, qid 0 00:29:29.748 [2024-05-15 13:47:42.597386] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.748 [2024-05-15 13:47:42.597393] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.748 [2024-05-15 13:47:42.597396] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597401] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d18b0) on tqpair=0x118a590 00:29:29.748 [2024-05-15 13:47:42.597407] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:29:29.748 [2024-05-15 13:47:42.597413] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:29:29.748 [2024-05-15 13:47:42.597424] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597429] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x118a590) 00:29:29.748 [2024-05-15 13:47:42.597437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.748 [2024-05-15 13:47:42.597455] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d18b0, cid 4, qid 0 00:29:29.748 [2024-05-15 13:47:42.597521] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:29.748 [2024-05-15 13:47:42.597534] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:29.748 [2024-05-15 13:47:42.597538] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597542] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x118a590): datao=0, datal=4096, cccid=4 00:29:29.748 [2024-05-15 13:47:42.597548] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11d18b0) on tqpair(0x118a590): expected_datao=0, payload_size=4096 00:29:29.748 [2024-05-15 13:47:42.597553] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597560] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597565] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597574] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.748 [2024-05-15 13:47:42.597580] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.748 [2024-05-15 13:47:42.597584] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.748 [2024-05-15 13:47:42.597589] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d18b0) on tqpair=0x118a590 00:29:29.748 [2024-05-15 13:47:42.597616] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:29:29.748 [2024-05-15 13:47:42.597667] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.597678] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x118a590) 00:29:29.749 [2024-05-15 13:47:42.597687] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.749 [2024-05-15 13:47:42.597696] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.597700] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.597704] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x118a590) 00:29:29.749 [2024-05-15 13:47:42.597711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:29.749 [2024-05-15 13:47:42.597743] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d18b0, cid 4, qid 0 00:29:29.749 [2024-05-15 13:47:42.597751] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1a10, cid 5, qid 0 00:29:29.749 [2024-05-15 13:47:42.597874] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:29.749 [2024-05-15 13:47:42.597882] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:29.749 [2024-05-15 13:47:42.597886] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.597890] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x118a590): datao=0, datal=1024, cccid=4 00:29:29.749 [2024-05-15 13:47:42.597895] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11d18b0) on tqpair(0x118a590): expected_datao=0, payload_size=1024 00:29:29.749 [2024-05-15 13:47:42.597900] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.597907] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.597911] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.597917] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.749 [2024-05-15 13:47:42.597923] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.749 [2024-05-15 13:47:42.597927] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.597931] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1a10) on tqpair=0x118a590 00:29:29.749 [2024-05-15 13:47:42.638720] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.749 [2024-05-15 13:47:42.638770] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.749 [2024-05-15 13:47:42.638777] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.638783] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d18b0) on tqpair=0x118a590 00:29:29.749 [2024-05-15 13:47:42.638815] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.638822] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x118a590) 00:29:29.749 [2024-05-15 13:47:42.638838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.749 [2024-05-15 13:47:42.638881] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d18b0, cid 4, qid 0 00:29:29.749 [2024-05-15 13:47:42.639001] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:29.749 
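
The GET LOG PAGE (02) commands in this stretch of the trace all target log identifier 0x70, the discovery log: cdw10 00ff0070 pulls the first 1024 bytes (the page header), 02ff0070 then fetches 3072 bytes (the header plus both 1024-byte entries, since the page reports 2 records), and the later 00010070 re-reads just the 8-byte generation counter so the driver can confirm the log did not change while it was being read; the surrounding c2h_data records show the matching datal=1024, 3072 and 8 transfers. The same page can be requested from application code roughly as below. This is a sketch under stated assumptions: fetch_discovery_log, g_log_done and the blocking poll loop are illustrative choices, and the genctr/numrec field names assume the spdk_nvmf_discovery_log_page layout from spdk/nvmf_spec.h.

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void
log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
        struct spdk_nvmf_discovery_log_page *log = cb_arg;

        if (!spdk_nvme_cpl_is_error(cpl)) {
                printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n",
                       log->genctr, log->numrec);
        }
        g_log_done = true;
}

/* Fetch the first `len` bytes of the discovery log page (log identifier
 * 0x70, nsid 0, offset 0, as in the GET LOG PAGE records in the trace).
 * `buf` should be a DMA-safe allocation, e.g. from spdk_dma_zmalloc(). */
static int
fetch_discovery_log(struct spdk_nvme_ctrlr *ctrlr,
                    struct spdk_nvmf_discovery_log_page *buf, uint32_t len)
{
        int rc;

        rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                              0, buf, len, 0,
                                              log_page_done, buf);
        if (rc != 0) {
                return rc;
        }
        while (!g_log_done) {
                spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
}

The header-then-entries-then-genctr pattern in the trace is the driver guarding against a log update racing with the read; an application reusing this sketch would want the same check before trusting the entries.
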
[2024-05-15 13:47:42.639009] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:29.749 [2024-05-15 13:47:42.639013] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.639017] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x118a590): datao=0, datal=3072, cccid=4 00:29:29.749 [2024-05-15 13:47:42.639022] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11d18b0) on tqpair(0x118a590): expected_datao=0, payload_size=3072 00:29:29.749 [2024-05-15 13:47:42.639028] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.639038] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.639043] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.639052] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.749 [2024-05-15 13:47:42.639059] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.749 [2024-05-15 13:47:42.639063] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.639067] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d18b0) on tqpair=0x118a590 00:29:29.749 [2024-05-15 13:47:42.639079] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.639084] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x118a590) 00:29:29.749 [2024-05-15 13:47:42.639092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.749 [2024-05-15 13:47:42.639119] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d18b0, cid 4, qid 0 00:29:29.749 [2024-05-15 13:47:42.639197] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:29.749 [2024-05-15 13:47:42.639204] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:29.749 [2024-05-15 13:47:42.639208] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.639212] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x118a590): datao=0, datal=8, cccid=4 00:29:29.749 [2024-05-15 13:47:42.639217] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x11d18b0) on tqpair(0x118a590): expected_datao=0, payload_size=8 00:29:29.749 [2024-05-15 13:47:42.639222] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.639229] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.639238] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.682665] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.749 [2024-05-15 13:47:42.682714] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.749 [2024-05-15 13:47:42.682721] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.749 [2024-05-15 13:47:42.682726] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d18b0) on tqpair=0x118a590 00:29:29.749 ===================================================== 00:29:29.749 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:29.749 
===================================================== 00:29:29.749 Controller Capabilities/Features 00:29:29.749 ================================ 00:29:29.749 Vendor ID: 0000 00:29:29.749 Subsystem Vendor ID: 0000 00:29:29.749 Serial Number: .................... 00:29:29.749 Model Number: ........................................ 00:29:29.749 Firmware Version: 24.05 00:29:29.749 Recommended Arb Burst: 0 00:29:29.749 IEEE OUI Identifier: 00 00 00 00:29:29.749 Multi-path I/O 00:29:29.749 May have multiple subsystem ports: No 00:29:29.749 May have multiple controllers: No 00:29:29.749 Associated with SR-IOV VF: No 00:29:29.749 Max Data Transfer Size: 131072 00:29:29.749 Max Number of Namespaces: 0 00:29:29.749 Max Number of I/O Queues: 1024 00:29:29.749 NVMe Specification Version (VS): 1.3 00:29:29.749 NVMe Specification Version (Identify): 1.3 00:29:29.749 Maximum Queue Entries: 128 00:29:29.749 Contiguous Queues Required: Yes 00:29:29.749 Arbitration Mechanisms Supported 00:29:29.749 Weighted Round Robin: Not Supported 00:29:29.749 Vendor Specific: Not Supported 00:29:29.749 Reset Timeout: 15000 ms 00:29:29.749 Doorbell Stride: 4 bytes 00:29:29.749 NVM Subsystem Reset: Not Supported 00:29:29.749 Command Sets Supported 00:29:29.749 NVM Command Set: Supported 00:29:29.749 Boot Partition: Not Supported 00:29:29.749 Memory Page Size Minimum: 4096 bytes 00:29:29.749 Memory Page Size Maximum: 4096 bytes 00:29:29.749 Persistent Memory Region: Not Supported 00:29:29.749 Optional Asynchronous Events Supported 00:29:29.749 Namespace Attribute Notices: Not Supported 00:29:29.749 Firmware Activation Notices: Not Supported 00:29:29.749 ANA Change Notices: Not Supported 00:29:29.749 PLE Aggregate Log Change Notices: Not Supported 00:29:29.749 LBA Status Info Alert Notices: Not Supported 00:29:29.749 EGE Aggregate Log Change Notices: Not Supported 00:29:29.749 Normal NVM Subsystem Shutdown event: Not Supported 00:29:29.749 Zone Descriptor Change Notices: Not Supported 00:29:29.749 Discovery Log Change Notices: Supported 00:29:29.749 Controller Attributes 00:29:29.749 128-bit Host Identifier: Not Supported 00:29:29.749 Non-Operational Permissive Mode: Not Supported 00:29:29.749 NVM Sets: Not Supported 00:29:29.749 Read Recovery Levels: Not Supported 00:29:29.749 Endurance Groups: Not Supported 00:29:29.749 Predictable Latency Mode: Not Supported 00:29:29.749 Traffic Based Keep ALive: Not Supported 00:29:29.749 Namespace Granularity: Not Supported 00:29:29.749 SQ Associations: Not Supported 00:29:29.749 UUID List: Not Supported 00:29:29.749 Multi-Domain Subsystem: Not Supported 00:29:29.749 Fixed Capacity Management: Not Supported 00:29:29.749 Variable Capacity Management: Not Supported 00:29:29.749 Delete Endurance Group: Not Supported 00:29:29.749 Delete NVM Set: Not Supported 00:29:29.749 Extended LBA Formats Supported: Not Supported 00:29:29.749 Flexible Data Placement Supported: Not Supported 00:29:29.749 00:29:29.749 Controller Memory Buffer Support 00:29:29.749 ================================ 00:29:29.749 Supported: No 00:29:29.749 00:29:29.749 Persistent Memory Region Support 00:29:29.749 ================================ 00:29:29.749 Supported: No 00:29:29.749 00:29:29.749 Admin Command Set Attributes 00:29:29.749 ============================ 00:29:29.749 Security Send/Receive: Not Supported 00:29:29.749 Format NVM: Not Supported 00:29:29.749 Firmware Activate/Download: Not Supported 00:29:29.749 Namespace Management: Not Supported 00:29:29.749 Device Self-Test: Not Supported 00:29:29.749 
Directives: Not Supported 00:29:29.749 NVMe-MI: Not Supported 00:29:29.749 Virtualization Management: Not Supported 00:29:29.749 Doorbell Buffer Config: Not Supported 00:29:29.749 Get LBA Status Capability: Not Supported 00:29:29.749 Command & Feature Lockdown Capability: Not Supported 00:29:29.749 Abort Command Limit: 1 00:29:29.749 Async Event Request Limit: 4 00:29:29.749 Number of Firmware Slots: N/A 00:29:29.749 Firmware Slot 1 Read-Only: N/A 00:29:29.749 Firmware Activation Without Reset: N/A 00:29:29.749 Multiple Update Detection Support: N/A 00:29:29.750 Firmware Update Granularity: No Information Provided 00:29:29.750 Per-Namespace SMART Log: No 00:29:29.750 Asymmetric Namespace Access Log Page: Not Supported 00:29:29.750 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:29.750 Command Effects Log Page: Not Supported 00:29:29.750 Get Log Page Extended Data: Supported 00:29:29.750 Telemetry Log Pages: Not Supported 00:29:29.750 Persistent Event Log Pages: Not Supported 00:29:29.750 Supported Log Pages Log Page: May Support 00:29:29.750 Commands Supported & Effects Log Page: Not Supported 00:29:29.750 Feature Identifiers & Effects Log Page:May Support 00:29:29.750 NVMe-MI Commands & Effects Log Page: May Support 00:29:29.750 Data Area 4 for Telemetry Log: Not Supported 00:29:29.750 Error Log Page Entries Supported: 128 00:29:29.750 Keep Alive: Not Supported 00:29:29.750 00:29:29.750 NVM Command Set Attributes 00:29:29.750 ========================== 00:29:29.750 Submission Queue Entry Size 00:29:29.750 Max: 1 00:29:29.750 Min: 1 00:29:29.750 Completion Queue Entry Size 00:29:29.750 Max: 1 00:29:29.750 Min: 1 00:29:29.750 Number of Namespaces: 0 00:29:29.750 Compare Command: Not Supported 00:29:29.750 Write Uncorrectable Command: Not Supported 00:29:29.750 Dataset Management Command: Not Supported 00:29:29.750 Write Zeroes Command: Not Supported 00:29:29.750 Set Features Save Field: Not Supported 00:29:29.750 Reservations: Not Supported 00:29:29.750 Timestamp: Not Supported 00:29:29.750 Copy: Not Supported 00:29:29.750 Volatile Write Cache: Not Present 00:29:29.750 Atomic Write Unit (Normal): 1 00:29:29.750 Atomic Write Unit (PFail): 1 00:29:29.750 Atomic Compare & Write Unit: 1 00:29:29.750 Fused Compare & Write: Supported 00:29:29.750 Scatter-Gather List 00:29:29.750 SGL Command Set: Supported 00:29:29.750 SGL Keyed: Supported 00:29:29.750 SGL Bit Bucket Descriptor: Not Supported 00:29:29.750 SGL Metadata Pointer: Not Supported 00:29:29.750 Oversized SGL: Not Supported 00:29:29.750 SGL Metadata Address: Not Supported 00:29:29.750 SGL Offset: Supported 00:29:29.750 Transport SGL Data Block: Not Supported 00:29:29.750 Replay Protected Memory Block: Not Supported 00:29:29.750 00:29:29.750 Firmware Slot Information 00:29:29.750 ========================= 00:29:29.750 Active slot: 0 00:29:29.750 00:29:29.750 00:29:29.750 Error Log 00:29:29.750 ========= 00:29:29.750 00:29:29.750 Active Namespaces 00:29:29.750 ================= 00:29:29.750 Discovery Log Page 00:29:29.750 ================== 00:29:29.750 Generation Counter: 2 00:29:29.750 Number of Records: 2 00:29:29.750 Record Format: 0 00:29:29.750 00:29:29.750 Discovery Log Entry 0 00:29:29.750 ---------------------- 00:29:29.750 Transport Type: 3 (TCP) 00:29:29.750 Address Family: 1 (IPv4) 00:29:29.750 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:29.750 Entry Flags: 00:29:29.750 Duplicate Returned Information: 1 00:29:29.750 Explicit Persistent Connection Support for Discovery: 1 00:29:29.750 Transport Requirements: 
00:29:29.750 Secure Channel: Not Required 00:29:29.750 Port ID: 0 (0x0000) 00:29:29.750 Controller ID: 65535 (0xffff) 00:29:29.750 Admin Max SQ Size: 128 00:29:29.750 Transport Service Identifier: 4420 00:29:29.750 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:29.750 Transport Address: 10.0.0.2 00:29:29.750 Discovery Log Entry 1 00:29:29.750 ---------------------- 00:29:29.750 Transport Type: 3 (TCP) 00:29:29.750 Address Family: 1 (IPv4) 00:29:29.750 Subsystem Type: 2 (NVM Subsystem) 00:29:29.750 Entry Flags: 00:29:29.750 Duplicate Returned Information: 0 00:29:29.750 Explicit Persistent Connection Support for Discovery: 0 00:29:29.750 Transport Requirements: 00:29:29.750 Secure Channel: Not Required 00:29:29.750 Port ID: 0 (0x0000) 00:29:29.750 Controller ID: 65535 (0xffff) 00:29:29.750 Admin Max SQ Size: 128 00:29:29.750 Transport Service Identifier: 4420 00:29:29.750 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:29.750 Transport Address: 10.0.0.2 [2024-05-15 13:47:42.682913] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:29:29.750 [2024-05-15 13:47:42.682958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.750 [2024-05-15 13:47:42.682968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.750 [2024-05-15 13:47:42.682975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.750 [2024-05-15 13:47:42.682982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:29.750 [2024-05-15 13:47:42.682998] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.750 [2024-05-15 13:47:42.683003] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.750 [2024-05-15 13:47:42.683008] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.750 [2024-05-15 13:47:42.683020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.750 [2024-05-15 13:47:42.683055] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.750 [2024-05-15 13:47:42.683151] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.750 [2024-05-15 13:47:42.683159] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.750 [2024-05-15 13:47:42.683163] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.750 [2024-05-15 13:47:42.683168] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.750 [2024-05-15 13:47:42.683178] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.750 [2024-05-15 13:47:42.683182] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.750 [2024-05-15 13:47:42.683186] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.750 [2024-05-15 13:47:42.683194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.750 [2024-05-15 13:47:42.683219] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.750 [2024-05-15 13:47:42.683311] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.750 [2024-05-15 13:47:42.683318] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.750 [2024-05-15 13:47:42.683322] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.750 [2024-05-15 13:47:42.683326] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.750 [2024-05-15 13:47:42.683333] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:29:29.750 [2024-05-15 13:47:42.683338] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:29:29.750 [2024-05-15 13:47:42.683348] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.750 [2024-05-15 13:47:42.683353] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.750 [2024-05-15 13:47:42.683357] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.750 [2024-05-15 13:47:42.683364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.750 [2024-05-15 13:47:42.683383] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.750 [2024-05-15 13:47:42.683444] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.750 [2024-05-15 13:47:42.683451] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.750 [2024-05-15 13:47:42.683455] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.750 [2024-05-15 13:47:42.683459] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.750 [2024-05-15 13:47:42.683471] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.750 [2024-05-15 13:47:42.683476] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.750 [2024-05-15 13:47:42.683480] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.750 [2024-05-15 13:47:42.683487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.750 [2024-05-15 13:47:42.683505] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.750 [2024-05-15 13:47:42.683561] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.750 [2024-05-15 13:47:42.683568] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.750 [2024-05-15 13:47:42.683571] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.750 [2024-05-15 13:47:42.683576] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.750 [2024-05-15 13:47:42.683587] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.750 [2024-05-15 13:47:42.683592] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.683596] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.751 [2024-05-15 13:47:42.683618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.751 [2024-05-15 13:47:42.683641] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.751 [2024-05-15 13:47:42.683703] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.751 [2024-05-15 13:47:42.683710] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.751 [2024-05-15 13:47:42.683713] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.683718] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.751 [2024-05-15 13:47:42.683730] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.683736] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.683740] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.751 [2024-05-15 13:47:42.683747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.751 [2024-05-15 13:47:42.683765] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.751 [2024-05-15 13:47:42.683821] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.751 [2024-05-15 13:47:42.683828] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.751 [2024-05-15 13:47:42.683832] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.683836] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.751 [2024-05-15 13:47:42.683848] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.683853] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.683857] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.751 [2024-05-15 13:47:42.683864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.751 [2024-05-15 13:47:42.683882] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.751 [2024-05-15 13:47:42.683938] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.751 [2024-05-15 13:47:42.683945] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.751 [2024-05-15 13:47:42.683949] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.683953] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.751 [2024-05-15 13:47:42.683965] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.683970] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.683974] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.751 [2024-05-15 13:47:42.683982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.751 [2024-05-15 13:47:42.683999] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x11d1750, cid 3, qid 0 00:29:29.751 [2024-05-15 13:47:42.684055] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.751 [2024-05-15 13:47:42.684061] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.751 [2024-05-15 13:47:42.684065] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684070] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.751 [2024-05-15 13:47:42.684081] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684086] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684090] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.751 [2024-05-15 13:47:42.684098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.751 [2024-05-15 13:47:42.684115] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.751 [2024-05-15 13:47:42.684169] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.751 [2024-05-15 13:47:42.684176] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.751 [2024-05-15 13:47:42.684180] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684184] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.751 [2024-05-15 13:47:42.684195] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684200] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684204] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.751 [2024-05-15 13:47:42.684212] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.751 [2024-05-15 13:47:42.684240] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.751 [2024-05-15 13:47:42.684300] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.751 [2024-05-15 13:47:42.684307] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.751 [2024-05-15 13:47:42.684311] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684316] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.751 [2024-05-15 13:47:42.684328] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684332] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684336] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.751 [2024-05-15 13:47:42.684344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.751 [2024-05-15 13:47:42.684364] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.751 [2024-05-15 13:47:42.684421] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.751 [2024-05-15 13:47:42.684428] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.751 [2024-05-15 13:47:42.684432] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684436] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.751 [2024-05-15 13:47:42.684448] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684453] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684464] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.751 [2024-05-15 13:47:42.684472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.751 [2024-05-15 13:47:42.684490] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.751 [2024-05-15 13:47:42.684549] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.751 [2024-05-15 13:47:42.684557] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.751 [2024-05-15 13:47:42.684560] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684565] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.751 [2024-05-15 13:47:42.684576] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684581] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684585] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.751 [2024-05-15 13:47:42.684592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.751 [2024-05-15 13:47:42.684622] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.751 [2024-05-15 13:47:42.684681] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.751 [2024-05-15 13:47:42.684689] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.751 [2024-05-15 13:47:42.684693] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684697] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.751 [2024-05-15 13:47:42.684709] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684714] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684718] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.751 [2024-05-15 13:47:42.684726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.751 [2024-05-15 13:47:42.684744] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.751 [2024-05-15 13:47:42.684800] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.751 [2024-05-15 13:47:42.684808] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.751 [2024-05-15 13:47:42.684812] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.751 
[2024-05-15 13:47:42.684816] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.751 [2024-05-15 13:47:42.684827] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684832] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.751 [2024-05-15 13:47:42.684836] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.751 [2024-05-15 13:47:42.684844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.751 [2024-05-15 13:47:42.684862] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.751 [2024-05-15 13:47:42.684917] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.751 [2024-05-15 13:47:42.684924] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.752 [2024-05-15 13:47:42.684928] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.684932] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.752 [2024-05-15 13:47:42.684944] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.684948] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.684952] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.752 [2024-05-15 13:47:42.684960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.752 [2024-05-15 13:47:42.684978] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.752 [2024-05-15 13:47:42.685034] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.752 [2024-05-15 13:47:42.685041] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.752 [2024-05-15 13:47:42.685045] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685049] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.752 [2024-05-15 13:47:42.685061] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685065] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685069] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.752 [2024-05-15 13:47:42.685077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.752 [2024-05-15 13:47:42.685095] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.752 [2024-05-15 13:47:42.685149] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.752 [2024-05-15 13:47:42.685158] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.752 [2024-05-15 13:47:42.685162] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685167] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.752 [2024-05-15 13:47:42.685178] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685183] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685187] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.752 [2024-05-15 13:47:42.685195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.752 [2024-05-15 13:47:42.685213] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.752 [2024-05-15 13:47:42.685270] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.752 [2024-05-15 13:47:42.685283] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.752 [2024-05-15 13:47:42.685287] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685292] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.752 [2024-05-15 13:47:42.685304] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685309] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685313] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.752 [2024-05-15 13:47:42.685320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.752 [2024-05-15 13:47:42.685340] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.752 [2024-05-15 13:47:42.685396] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.752 [2024-05-15 13:47:42.685403] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.752 [2024-05-15 13:47:42.685407] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685412] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.752 [2024-05-15 13:47:42.685423] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685428] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685432] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.752 [2024-05-15 13:47:42.685440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.752 [2024-05-15 13:47:42.685457] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.752 [2024-05-15 13:47:42.685511] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.752 [2024-05-15 13:47:42.685518] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.752 [2024-05-15 13:47:42.685522] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685527] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.752 [2024-05-15 13:47:42.685538] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685543] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.752 [2024-05-15 
13:47:42.685547] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.752 [2024-05-15 13:47:42.685554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.752 [2024-05-15 13:47:42.685572] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.752 [2024-05-15 13:47:42.685653] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.752 [2024-05-15 13:47:42.685662] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.752 [2024-05-15 13:47:42.685666] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685670] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.752 [2024-05-15 13:47:42.685683] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685687] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685691] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.752 [2024-05-15 13:47:42.685699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.752 [2024-05-15 13:47:42.685719] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.752 [2024-05-15 13:47:42.685776] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.752 [2024-05-15 13:47:42.685783] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.752 [2024-05-15 13:47:42.685787] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685791] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.752 [2024-05-15 13:47:42.685803] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685807] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685811] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.752 [2024-05-15 13:47:42.685818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.752 [2024-05-15 13:47:42.685836] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.752 [2024-05-15 13:47:42.685895] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.752 [2024-05-15 13:47:42.685902] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.752 [2024-05-15 13:47:42.685906] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685910] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.752 [2024-05-15 13:47:42.685922] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685927] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.685931] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.752 [2024-05-15 13:47:42.685938] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.752 [2024-05-15 13:47:42.685956] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.752 [2024-05-15 13:47:42.686013] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.752 [2024-05-15 13:47:42.686025] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.752 [2024-05-15 13:47:42.686030] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.686035] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.752 [2024-05-15 13:47:42.686047] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.686052] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.686056] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.752 [2024-05-15 13:47:42.686064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.752 [2024-05-15 13:47:42.686083] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.752 [2024-05-15 13:47:42.686142] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.752 [2024-05-15 13:47:42.686149] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.752 [2024-05-15 13:47:42.686153] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.686158] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.752 [2024-05-15 13:47:42.686169] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.686174] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.752 [2024-05-15 13:47:42.686178] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.752 [2024-05-15 13:47:42.686185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.752 [2024-05-15 13:47:42.686204] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.753 [2024-05-15 13:47:42.686259] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.753 [2024-05-15 13:47:42.686267] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.753 [2024-05-15 13:47:42.686270] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.753 [2024-05-15 13:47:42.686275] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.753 [2024-05-15 13:47:42.686286] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.753 [2024-05-15 13:47:42.686291] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.753 [2024-05-15 13:47:42.686295] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.753 [2024-05-15 13:47:42.686303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-05-15 13:47:42.686320] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.753 [2024-05-15 13:47:42.686379] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.753 [2024-05-15 13:47:42.686386] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.753 [2024-05-15 13:47:42.686390] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.753 [2024-05-15 13:47:42.686395] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.753 [2024-05-15 13:47:42.686406] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.753 [2024-05-15 13:47:42.686411] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.753 [2024-05-15 13:47:42.686415] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.753 [2024-05-15 13:47:42.686422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-05-15 13:47:42.686440] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.753 [2024-05-15 13:47:42.686498] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.753 [2024-05-15 13:47:42.686510] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.753 [2024-05-15 13:47:42.686515] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.753 [2024-05-15 13:47:42.686519] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.753 [2024-05-15 13:47:42.686531] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.753 [2024-05-15 13:47:42.686536] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.753 [2024-05-15 13:47:42.686540] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.753 [2024-05-15 13:47:42.686548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-05-15 13:47:42.686577] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.753 [2024-05-15 13:47:42.691620] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:29.753 [2024-05-15 13:47:42.691644] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.753 [2024-05-15 13:47:42.691650] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.753 [2024-05-15 13:47:42.691655] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.753 [2024-05-15 13:47:42.691673] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:29.753 [2024-05-15 13:47:42.691678] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:29.753 [2024-05-15 13:47:42.691683] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x118a590) 00:29:29.753 [2024-05-15 13:47:42.691692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.753 [2024-05-15 13:47:42.691721] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x11d1750, cid 3, qid 0 00:29:29.753 [2024-05-15 13:47:42.691785] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
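
Nearly everything between the "Prepare to destruct" record and this point is the teardown of the discovery controller: the four outstanding AERs complete as ABORTED - SQ DELETION, nvme_ctrlr_shutdown_set_cc_done writes the shutdown notification into CC over a FABRIC PROPERTY SET, and nvme_ctrlr_shutdown_poll_async then re-reads CSTS over the repeated FABRIC PROPERTY GET exchanges until the record that follows reports shutdown complete in 8 milliseconds, well inside the 10000 ms timeout it logged. An application never issues these property commands itself; it detaches the controller and the driver runs this sequence. A minimal sketch using public calls (teardown is an illustrative name, and the CSTS read is only there to show which register the polling targets):

#include <stdio.h>

#include "spdk/nvme.h"

/* Graceful teardown: spdk_nvme_detach() drives the CC shutdown-notification
 * write and the CSTS polling visible in the property get/set records above. */
static void
teardown(struct spdk_nvme_ctrlr *ctrlr)
{
        union spdk_nvme_csts_register csts;

        /* CSTS is the register the repeated FABRIC PROPERTY GETs read back;
         * shst reports shutdown progress, rdy reports controller readiness. */
        csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);
        printf("CSTS.RDY=%u CSTS.SHST=%u\n",
               (unsigned)csts.bits.rdy, (unsigned)csts.bits.shst);

        spdk_nvme_detach(ctrlr);
}
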
00:29:29.753 [2024-05-15 13:47:42.691792] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:29.753 [2024-05-15 13:47:42.691797] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:29.753 [2024-05-15 13:47:42.691801] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x11d1750) on tqpair=0x118a590 00:29:29.753 [2024-05-15 13:47:42.691811] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds 00:29:29.753 00:29:29.753 13:47:42 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:29.753 [2024-05-15 13:47:42.728396] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:29:29.753 [2024-05-15 13:47:42.728452] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105446 ] 00:29:30.016 [2024-05-15 13:47:42.847347] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:30.016 [2024-05-15 13:47:42.865901] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:29:30.016 [2024-05-15 13:47:42.865966] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:30.016 [2024-05-15 13:47:42.865974] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:30.016 [2024-05-15 13:47:42.865989] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:30.016 [2024-05-15 13:47:42.865999] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:30.016 [2024-05-15 13:47:42.866146] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:29:30.016 [2024-05-15 13:47:42.866196] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c12590 0 00:29:30.016 [2024-05-15 13:47:42.870622] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:30.016 [2024-05-15 13:47:42.870646] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:30.016 [2024-05-15 13:47:42.870658] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:30.016 [2024-05-15 13:47:42.870662] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:30.016 [2024-05-15 13:47:42.870711] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.016 [2024-05-15 13:47:42.870719] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.016 [2024-05-15 13:47:42.870724] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c12590) 00:29:30.016 [2024-05-15 13:47:42.870739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:30.016 [2024-05-15 13:47:42.870780] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59330, cid 0, qid 0 00:29:30.016 [2024-05-15 13:47:42.878628] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
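
At this point host/identify.sh launches spdk_nvme_identify a second time, now with subnqn:nqn.2016-06.io.spdk:cnode1, so the records around here repeat the whole bring-up against the NVM subsystem instead of the discovery subsystem: connect adminq, the TCP socket connect and icreq/icresp exchange, FABRIC CONNECT, reading VS and CAP, enabling the controller through CC.EN, waiting for CSTS.RDY = 1, and finally identify controller. All of that is driven by a single spdk_nvme_connect() call given this transport ID; a minimal host program that triggers the same sequence might look like the sketch below (the program name string and the printed fields are illustrative choices, not taken from the test):

#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&opts);
        opts.name = "identify_sketch";
        if (spdk_env_init(&opts) != 0) {
                return 1;
        }

        /* Same transport ID string the test passes via -r. */
        if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
                return 1;
        }

        /* Drives the connect/icreq/FABRIC CONNECT/CC.EN/CSTS.RDY sequence
         * recorded in the trace, then caches the identify data. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
                return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("cntlid 0x%04x, mdts %u\n", cdata->cntlid, cdata->mdts);

        spdk_nvme_detach(ctrlr);
        return 0;
}
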
00:29:30.016 [2024-05-15 13:47:42.878649] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.016 [2024-05-15 13:47:42.878655] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.016 [2024-05-15 13:47:42.878660] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59330) on tqpair=0x1c12590 00:29:30.016 [2024-05-15 13:47:42.878676] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:30.016 [2024-05-15 13:47:42.878685] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:29:30.016 [2024-05-15 13:47:42.878692] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:29:30.016 [2024-05-15 13:47:42.878709] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.016 [2024-05-15 13:47:42.878715] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.016 [2024-05-15 13:47:42.878719] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c12590) 00:29:30.016 [2024-05-15 13:47:42.878729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.016 [2024-05-15 13:47:42.878759] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59330, cid 0, qid 0 00:29:30.016 [2024-05-15 13:47:42.878834] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.016 [2024-05-15 13:47:42.878842] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.016 [2024-05-15 13:47:42.878846] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.016 [2024-05-15 13:47:42.878850] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59330) on tqpair=0x1c12590 00:29:30.016 [2024-05-15 13:47:42.878857] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:29:30.017 [2024-05-15 13:47:42.878866] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:29:30.017 [2024-05-15 13:47:42.878874] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.878878] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.878882] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c12590) 00:29:30.017 [2024-05-15 13:47:42.878890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.017 [2024-05-15 13:47:42.878910] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59330, cid 0, qid 0 00:29:30.017 [2024-05-15 13:47:42.878972] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.017 [2024-05-15 13:47:42.878979] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.017 [2024-05-15 13:47:42.878983] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.878988] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59330) on tqpair=0x1c12590 00:29:30.017 [2024-05-15 13:47:42.878995] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:29:30.017 
[2024-05-15 13:47:42.879004] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:29:30.017 [2024-05-15 13:47:42.879011] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879016] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879020] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c12590) 00:29:30.017 [2024-05-15 13:47:42.879028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.017 [2024-05-15 13:47:42.879046] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59330, cid 0, qid 0 00:29:30.017 [2024-05-15 13:47:42.879113] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.017 [2024-05-15 13:47:42.879120] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.017 [2024-05-15 13:47:42.879124] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879128] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59330) on tqpair=0x1c12590 00:29:30.017 [2024-05-15 13:47:42.879135] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:30.017 [2024-05-15 13:47:42.879145] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879150] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879154] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c12590) 00:29:30.017 [2024-05-15 13:47:42.879162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.017 [2024-05-15 13:47:42.879181] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59330, cid 0, qid 0 00:29:30.017 [2024-05-15 13:47:42.879236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.017 [2024-05-15 13:47:42.879244] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.017 [2024-05-15 13:47:42.879247] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879252] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59330) on tqpair=0x1c12590 00:29:30.017 [2024-05-15 13:47:42.879258] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:29:30.017 [2024-05-15 13:47:42.879263] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:29:30.017 [2024-05-15 13:47:42.879272] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:30.017 [2024-05-15 13:47:42.879378] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:29:30.017 [2024-05-15 13:47:42.879383] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:30.017 [2024-05-15 13:47:42.879393] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879398] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879402] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c12590) 00:29:30.017 [2024-05-15 13:47:42.879409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.017 [2024-05-15 13:47:42.879429] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59330, cid 0, qid 0 00:29:30.017 [2024-05-15 13:47:42.879485] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.017 [2024-05-15 13:47:42.879492] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.017 [2024-05-15 13:47:42.879496] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879500] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59330) on tqpair=0x1c12590 00:29:30.017 [2024-05-15 13:47:42.879507] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:30.017 [2024-05-15 13:47:42.879517] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879522] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879526] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c12590) 00:29:30.017 [2024-05-15 13:47:42.879534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.017 [2024-05-15 13:47:42.879552] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59330, cid 0, qid 0 00:29:30.017 [2024-05-15 13:47:42.879621] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.017 [2024-05-15 13:47:42.879630] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.017 [2024-05-15 13:47:42.879633] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879638] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59330) on tqpair=0x1c12590 00:29:30.017 [2024-05-15 13:47:42.879645] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:30.017 [2024-05-15 13:47:42.879650] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:29:30.017 [2024-05-15 13:47:42.879659] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:29:30.017 [2024-05-15 13:47:42.879676] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:29:30.017 [2024-05-15 13:47:42.879687] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879691] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c12590) 00:29:30.017 [2024-05-15 13:47:42.879700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.017 
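By this point the trace has walked the whole bring-up: CC.EN written to 1 via FABRIC PROPERTY SET, CSTS.RDY polled until it reads 1, and the first IDENTIFY controller command (CNS 01h, the cdw10:00000001 admin command above) queued. All of that is driven internally by the blocking connect call; a rough sketch against the target from this run (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1), with the program name chosen arbitrarily:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "tcp_connect_sketch";
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same transport ID as the controller reported in this log. */
	if (spdk_nvme_transport_id_parse(&trid,
		"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
		"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() runs the init state machine traced above:
	 * read vs/cap, set CC.EN, wait for CSTS.RDY, identify, configure AER. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Connected: SN %.20s, model %.40s\n", cdata->sn, cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}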
[2024-05-15 13:47:42.879723] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59330, cid 0, qid 0 00:29:30.017 [2024-05-15 13:47:42.879832] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:30.017 [2024-05-15 13:47:42.879839] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:30.017 [2024-05-15 13:47:42.879843] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879848] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c12590): datao=0, datal=4096, cccid=0 00:29:30.017 [2024-05-15 13:47:42.879853] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c59330) on tqpair(0x1c12590): expected_datao=0, payload_size=4096 00:29:30.017 [2024-05-15 13:47:42.879858] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879867] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879871] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879880] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.017 [2024-05-15 13:47:42.879887] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.017 [2024-05-15 13:47:42.879891] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879895] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59330) on tqpair=0x1c12590 00:29:30.017 [2024-05-15 13:47:42.879905] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:29:30.017 [2024-05-15 13:47:42.879911] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:29:30.017 [2024-05-15 13:47:42.879916] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:29:30.017 [2024-05-15 13:47:42.879921] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:29:30.017 [2024-05-15 13:47:42.879926] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:29:30.017 [2024-05-15 13:47:42.879932] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:29:30.017 [2024-05-15 13:47:42.879947] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:29:30.017 [2024-05-15 13:47:42.879958] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879964] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.879968] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c12590) 00:29:30.017 [2024-05-15 13:47:42.879976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:30.017 [2024-05-15 13:47:42.879997] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59330, cid 0, qid 0 00:29:30.017 [2024-05-15 13:47:42.880059] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.017 [2024-05-15 13:47:42.880066] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.017 
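The identify-done lines above record a transport max_xfer_size of 4294967295 and an MDTS-capped max_xfer_size of 131072 bytes, the same 128 KiB that shows up later as "Max Data Transfer Size" in the controller report. Both limits are visible through the public API once connected; a short sketch reusing the ctrlr handle from the previous example:

#include <stdio.h>
#include "spdk/nvme.h"

/* Print the negotiated transfer limits for an already-connected controller. */
static void print_xfer_limits(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	/* cdata->mdts is a power-of-two multiple of the minimum page size (CAP.MPSMIN). */
	printf("MDTS field: %u\n", (unsigned int)cdata->mdts);
	printf("Max I/O transfer size: %u bytes\n",
	       (unsigned int)spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));
}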
[2024-05-15 13:47:42.880070] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.880074] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59330) on tqpair=0x1c12590 00:29:30.017 [2024-05-15 13:47:42.880084] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.880088] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.880092] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c12590) 00:29:30.017 [2024-05-15 13:47:42.880099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.017 [2024-05-15 13:47:42.880106] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.880111] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.880115] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c12590) 00:29:30.017 [2024-05-15 13:47:42.880121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.017 [2024-05-15 13:47:42.880128] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.880132] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.017 [2024-05-15 13:47:42.880136] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c12590) 00:29:30.018 [2024-05-15 13:47:42.880142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.018 [2024-05-15 13:47:42.880149] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880153] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880157] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c12590) 00:29:30.018 [2024-05-15 13:47:42.880163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.018 [2024-05-15 13:47:42.880169] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.880183] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.880191] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880196] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c12590) 00:29:30.018 [2024-05-15 13:47:42.880203] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.018 [2024-05-15 13:47:42.880225] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59330, cid 0, qid 0 00:29:30.018 [2024-05-15 13:47:42.880245] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59490, cid 1, qid 0 00:29:30.018 [2024-05-15 13:47:42.880251] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c595f0, cid 2, qid 0 00:29:30.018 
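The four ASYNC EVENT REQUEST capsules (cid 0-3) and the GET FEATURES KEEP ALIVE TIMER above correspond to the "configure AER" and "set keep alive timeout" states, and the next chunk of the trace reports keep alives being sent every 5000000 us. On the application side these map to the AER callback registration and the keep_alive_timeout_ms controller option; a sketch with illustrative function names:

#include <stdio.h>
#include "spdk/nvme.h"

static void aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	/* Invoked from spdk_nvme_ctrlr_process_admin_completions() when one of
	 * the outstanding ASYNC EVENT REQUEST commands completes. */
	printf("AER completed, cdw0=0x%x\n", cpl->cdw0);
}

static struct spdk_nvme_ctrlr *
connect_with_keepalive(const struct spdk_nvme_transport_id *trid)
{
	struct spdk_nvme_ctrlr_opts opts;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	opts.keep_alive_timeout_ms = 5000;	/* matches the 5000000 us in this trace */

	ctrlr = spdk_nvme_connect(trid, &opts, sizeof(opts));
	if (ctrlr != NULL) {
		spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	}
	return ctrlr;
}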
[2024-05-15 13:47:42.880256] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59750, cid 3, qid 0 00:29:30.018 [2024-05-15 13:47:42.880261] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c598b0, cid 4, qid 0 00:29:30.018 [2024-05-15 13:47:42.880368] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.018 [2024-05-15 13:47:42.880375] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.018 [2024-05-15 13:47:42.880379] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880384] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c598b0) on tqpair=0x1c12590 00:29:30.018 [2024-05-15 13:47:42.880391] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:29:30.018 [2024-05-15 13:47:42.880396] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.880411] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.880418] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.880426] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880431] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880435] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c12590) 00:29:30.018 [2024-05-15 13:47:42.880442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:30.018 [2024-05-15 13:47:42.880464] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c598b0, cid 4, qid 0 00:29:30.018 [2024-05-15 13:47:42.880527] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.018 [2024-05-15 13:47:42.880535] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.018 [2024-05-15 13:47:42.880539] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880543] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c598b0) on tqpair=0x1c12590 00:29:30.018 [2024-05-15 13:47:42.880600] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.880631] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.880641] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880646] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c12590) 00:29:30.018 [2024-05-15 13:47:42.880654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.018 [2024-05-15 13:47:42.880677] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c598b0, cid 4, qid 0 00:29:30.018 [2024-05-15 
13:47:42.880748] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:30.018 [2024-05-15 13:47:42.880756] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:30.018 [2024-05-15 13:47:42.880760] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880764] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c12590): datao=0, datal=4096, cccid=4 00:29:30.018 [2024-05-15 13:47:42.880769] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c598b0) on tqpair(0x1c12590): expected_datao=0, payload_size=4096 00:29:30.018 [2024-05-15 13:47:42.880774] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880782] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880786] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880795] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.018 [2024-05-15 13:47:42.880802] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.018 [2024-05-15 13:47:42.880805] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880810] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c598b0) on tqpair=0x1c12590 00:29:30.018 [2024-05-15 13:47:42.880827] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:29:30.018 [2024-05-15 13:47:42.880839] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.880850] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.880859] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880863] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c12590) 00:29:30.018 [2024-05-15 13:47:42.880871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.018 [2024-05-15 13:47:42.880892] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c598b0, cid 4, qid 0 00:29:30.018 [2024-05-15 13:47:42.880968] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:30.018 [2024-05-15 13:47:42.880988] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:30.018 [2024-05-15 13:47:42.880993] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.880997] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c12590): datao=0, datal=4096, cccid=4 00:29:30.018 [2024-05-15 13:47:42.881002] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c598b0) on tqpair(0x1c12590): expected_datao=0, payload_size=4096 00:29:30.018 [2024-05-15 13:47:42.881007] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.881015] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.881019] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.881028] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
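The C2HData PDU above carries the active namespace list (IDENTIFY CNS 02h), after which spdk_nvme_ctrlr_get_ns logs "Namespace 1 was added" and the per-namespace IDENTIFY (nsid:1) goes out. Once connect returns, the same namespaces can be enumerated with the active-namespace iterators; a sketch reusing the ctrlr handle from earlier:

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Walk the active namespace list that the init sequence above just discovered. */
static void list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("ns %u: %" PRIu64 " sectors of %u bytes\n", nsid,
		       spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}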
00:29:30.018 [2024-05-15 13:47:42.881035] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.018 [2024-05-15 13:47:42.881039] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.881043] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c598b0) on tqpair=0x1c12590 00:29:30.018 [2024-05-15 13:47:42.881057] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.881068] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.881077] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.881082] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c12590) 00:29:30.018 [2024-05-15 13:47:42.881090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.018 [2024-05-15 13:47:42.881111] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c598b0, cid 4, qid 0 00:29:30.018 [2024-05-15 13:47:42.881185] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:30.018 [2024-05-15 13:47:42.881193] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:30.018 [2024-05-15 13:47:42.881196] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.881201] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c12590): datao=0, datal=4096, cccid=4 00:29:30.018 [2024-05-15 13:47:42.881206] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c598b0) on tqpair(0x1c12590): expected_datao=0, payload_size=4096 00:29:30.018 [2024-05-15 13:47:42.881210] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.881218] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.881222] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.881231] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.018 [2024-05-15 13:47:42.881237] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.018 [2024-05-15 13:47:42.881241] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.881245] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c598b0) on tqpair=0x1c12590 00:29:30.018 [2024-05-15 13:47:42.881260] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.881271] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.881280] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.881287] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.881293] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.881299] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:29:30.018 [2024-05-15 13:47:42.881304] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:29:30.018 [2024-05-15 13:47:42.881310] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:29:30.018 [2024-05-15 13:47:42.881344] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.881351] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c12590) 00:29:30.018 [2024-05-15 13:47:42.881358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.018 [2024-05-15 13:47:42.881366] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.881371] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.018 [2024-05-15 13:47:42.881374] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c12590) 00:29:30.019 [2024-05-15 13:47:42.881381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.019 [2024-05-15 13:47:42.881408] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c598b0, cid 4, qid 0 00:29:30.019 [2024-05-15 13:47:42.881417] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59a10, cid 5, qid 0 00:29:30.019 [2024-05-15 13:47:42.881487] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.019 [2024-05-15 13:47:42.881494] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.019 [2024-05-15 13:47:42.881498] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.881503] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c598b0) on tqpair=0x1c12590 00:29:30.019 [2024-05-15 13:47:42.881511] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.019 [2024-05-15 13:47:42.881517] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.019 [2024-05-15 13:47:42.881521] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.881534] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59a10) on tqpair=0x1c12590 00:29:30.019 [2024-05-15 13:47:42.881546] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.881551] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c12590) 00:29:30.019 [2024-05-15 13:47:42.881558] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.019 [2024-05-15 13:47:42.881577] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59a10, cid 5, qid 0 00:29:30.019 [2024-05-15 13:47:42.881649] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.019 [2024-05-15 13:47:42.881659] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:29:30.019 [2024-05-15 13:47:42.881663] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.881667] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59a10) on tqpair=0x1c12590 00:29:30.019 [2024-05-15 13:47:42.881679] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.881684] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c12590) 00:29:30.019 [2024-05-15 13:47:42.881691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.019 [2024-05-15 13:47:42.881712] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59a10, cid 5, qid 0 00:29:30.019 [2024-05-15 13:47:42.881771] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.019 [2024-05-15 13:47:42.881778] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.019 [2024-05-15 13:47:42.881782] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.881786] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59a10) on tqpair=0x1c12590 00:29:30.019 [2024-05-15 13:47:42.881798] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.881802] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c12590) 00:29:30.019 [2024-05-15 13:47:42.881810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.019 [2024-05-15 13:47:42.881828] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59a10, cid 5, qid 0 00:29:30.019 [2024-05-15 13:47:42.881881] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.019 [2024-05-15 13:47:42.881888] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.019 [2024-05-15 13:47:42.881892] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.881896] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59a10) on tqpair=0x1c12590 00:29:30.019 [2024-05-15 13:47:42.881911] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.881917] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c12590) 00:29:30.019 [2024-05-15 13:47:42.881924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.019 [2024-05-15 13:47:42.881932] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.881937] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c12590) 00:29:30.019 [2024-05-15 13:47:42.881944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.019 [2024-05-15 13:47:42.881951] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.881956] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c12590) 00:29:30.019 [2024-05-15 13:47:42.881962] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.019 [2024-05-15 13:47:42.881978] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.881983] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c12590) 00:29:30.019 [2024-05-15 13:47:42.881990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.019 [2024-05-15 13:47:42.882011] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59a10, cid 5, qid 0 00:29:30.019 [2024-05-15 13:47:42.882019] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c598b0, cid 4, qid 0 00:29:30.019 [2024-05-15 13:47:42.882024] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59b70, cid 6, qid 0 00:29:30.019 [2024-05-15 13:47:42.882029] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59cd0, cid 7, qid 0 00:29:30.019 [2024-05-15 13:47:42.882191] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:30.019 [2024-05-15 13:47:42.882209] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:30.019 [2024-05-15 13:47:42.882214] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882218] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c12590): datao=0, datal=8192, cccid=5 00:29:30.019 [2024-05-15 13:47:42.882223] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c59a10) on tqpair(0x1c12590): expected_datao=0, payload_size=8192 00:29:30.019 [2024-05-15 13:47:42.882228] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882249] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882254] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882261] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:30.019 [2024-05-15 13:47:42.882267] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:30.019 [2024-05-15 13:47:42.882271] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882275] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c12590): datao=0, datal=512, cccid=4 00:29:30.019 [2024-05-15 13:47:42.882280] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c598b0) on tqpair(0x1c12590): expected_datao=0, payload_size=512 00:29:30.019 [2024-05-15 13:47:42.882285] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882291] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882295] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882301] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:30.019 [2024-05-15 13:47:42.882307] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:30.019 [2024-05-15 13:47:42.882311] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882315] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c12590): datao=0, datal=512, 
cccid=6 00:29:30.019 [2024-05-15 13:47:42.882319] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c59b70) on tqpair(0x1c12590): expected_datao=0, payload_size=512 00:29:30.019 [2024-05-15 13:47:42.882324] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882331] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882334] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882340] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:30.019 [2024-05-15 13:47:42.882346] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:30.019 [2024-05-15 13:47:42.882350] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882354] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c12590): datao=0, datal=4096, cccid=7 00:29:30.019 [2024-05-15 13:47:42.882359] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c59cd0) on tqpair(0x1c12590): expected_datao=0, payload_size=4096 00:29:30.019 [2024-05-15 13:47:42.882363] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882370] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882375] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882383] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.019 [2024-05-15 13:47:42.882389] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.019 [2024-05-15 13:47:42.882393] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882397] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59a10) on tqpair=0x1c12590 00:29:30.019 [2024-05-15 13:47:42.882415] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.019 [2024-05-15 13:47:42.882422] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.019 [2024-05-15 13:47:42.882426] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882430] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c598b0) on tqpair=0x1c12590 00:29:30.019 [2024-05-15 13:47:42.882441] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.019 [2024-05-15 13:47:42.882448] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.019 [2024-05-15 13:47:42.882452] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882456] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59b70) on tqpair=0x1c12590 00:29:30.019 [2024-05-15 13:47:42.882467] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.019 [2024-05-15 13:47:42.882474] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.019 [2024-05-15 13:47:42.882478] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.019 [2024-05-15 13:47:42.882482] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59cd0) on tqpair=0x1c12590 00:29:30.019 ===================================================== 00:29:30.019 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:30.019 ===================================================== 00:29:30.019 
Controller Capabilities/Features 00:29:30.019 ================================ 00:29:30.019 Vendor ID: 8086 00:29:30.019 Subsystem Vendor ID: 8086 00:29:30.019 Serial Number: SPDK00000000000001 00:29:30.019 Model Number: SPDK bdev Controller 00:29:30.019 Firmware Version: 24.05 00:29:30.019 Recommended Arb Burst: 6 00:29:30.019 IEEE OUI Identifier: e4 d2 5c 00:29:30.019 Multi-path I/O 00:29:30.019 May have multiple subsystem ports: Yes 00:29:30.019 May have multiple controllers: Yes 00:29:30.019 Associated with SR-IOV VF: No 00:29:30.019 Max Data Transfer Size: 131072 00:29:30.019 Max Number of Namespaces: 32 00:29:30.019 Max Number of I/O Queues: 127 00:29:30.020 NVMe Specification Version (VS): 1.3 00:29:30.020 NVMe Specification Version (Identify): 1.3 00:29:30.020 Maximum Queue Entries: 128 00:29:30.020 Contiguous Queues Required: Yes 00:29:30.020 Arbitration Mechanisms Supported 00:29:30.020 Weighted Round Robin: Not Supported 00:29:30.020 Vendor Specific: Not Supported 00:29:30.020 Reset Timeout: 15000 ms 00:29:30.020 Doorbell Stride: 4 bytes 00:29:30.020 NVM Subsystem Reset: Not Supported 00:29:30.020 Command Sets Supported 00:29:30.020 NVM Command Set: Supported 00:29:30.020 Boot Partition: Not Supported 00:29:30.020 Memory Page Size Minimum: 4096 bytes 00:29:30.020 Memory Page Size Maximum: 4096 bytes 00:29:30.020 Persistent Memory Region: Not Supported 00:29:30.020 Optional Asynchronous Events Supported 00:29:30.020 Namespace Attribute Notices: Supported 00:29:30.020 Firmware Activation Notices: Not Supported 00:29:30.020 ANA Change Notices: Not Supported 00:29:30.020 PLE Aggregate Log Change Notices: Not Supported 00:29:30.020 LBA Status Info Alert Notices: Not Supported 00:29:30.020 EGE Aggregate Log Change Notices: Not Supported 00:29:30.020 Normal NVM Subsystem Shutdown event: Not Supported 00:29:30.020 Zone Descriptor Change Notices: Not Supported 00:29:30.020 Discovery Log Change Notices: Not Supported 00:29:30.020 Controller Attributes 00:29:30.020 128-bit Host Identifier: Supported 00:29:30.020 Non-Operational Permissive Mode: Not Supported 00:29:30.020 NVM Sets: Not Supported 00:29:30.020 Read Recovery Levels: Not Supported 00:29:30.020 Endurance Groups: Not Supported 00:29:30.020 Predictable Latency Mode: Not Supported 00:29:30.020 Traffic Based Keep ALive: Not Supported 00:29:30.020 Namespace Granularity: Not Supported 00:29:30.020 SQ Associations: Not Supported 00:29:30.020 UUID List: Not Supported 00:29:30.020 Multi-Domain Subsystem: Not Supported 00:29:30.020 Fixed Capacity Management: Not Supported 00:29:30.020 Variable Capacity Management: Not Supported 00:29:30.020 Delete Endurance Group: Not Supported 00:29:30.020 Delete NVM Set: Not Supported 00:29:30.020 Extended LBA Formats Supported: Not Supported 00:29:30.020 Flexible Data Placement Supported: Not Supported 00:29:30.020 00:29:30.020 Controller Memory Buffer Support 00:29:30.020 ================================ 00:29:30.020 Supported: No 00:29:30.020 00:29:30.020 Persistent Memory Region Support 00:29:30.020 ================================ 00:29:30.020 Supported: No 00:29:30.020 00:29:30.020 Admin Command Set Attributes 00:29:30.020 ============================ 00:29:30.020 Security Send/Receive: Not Supported 00:29:30.020 Format NVM: Not Supported 00:29:30.020 Firmware Activate/Download: Not Supported 00:29:30.020 Namespace Management: Not Supported 00:29:30.020 Device Self-Test: Not Supported 00:29:30.020 Directives: Not Supported 00:29:30.020 NVMe-MI: Not Supported 00:29:30.020 Virtualization Management: 
Not Supported 00:29:30.020 Doorbell Buffer Config: Not Supported 00:29:30.020 Get LBA Status Capability: Not Supported 00:29:30.020 Command & Feature Lockdown Capability: Not Supported 00:29:30.020 Abort Command Limit: 4 00:29:30.020 Async Event Request Limit: 4 00:29:30.020 Number of Firmware Slots: N/A 00:29:30.020 Firmware Slot 1 Read-Only: N/A 00:29:30.020 Firmware Activation Without Reset: N/A 00:29:30.020 Multiple Update Detection Support: N/A 00:29:30.020 Firmware Update Granularity: No Information Provided 00:29:30.020 Per-Namespace SMART Log: No 00:29:30.020 Asymmetric Namespace Access Log Page: Not Supported 00:29:30.020 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:30.020 Command Effects Log Page: Supported 00:29:30.020 Get Log Page Extended Data: Supported 00:29:30.020 Telemetry Log Pages: Not Supported 00:29:30.020 Persistent Event Log Pages: Not Supported 00:29:30.020 Supported Log Pages Log Page: May Support 00:29:30.020 Commands Supported & Effects Log Page: Not Supported 00:29:30.020 Feature Identifiers & Effects Log Page:May Support 00:29:30.020 NVMe-MI Commands & Effects Log Page: May Support 00:29:30.020 Data Area 4 for Telemetry Log: Not Supported 00:29:30.020 Error Log Page Entries Supported: 128 00:29:30.020 Keep Alive: Supported 00:29:30.020 Keep Alive Granularity: 10000 ms 00:29:30.020 00:29:30.020 NVM Command Set Attributes 00:29:30.020 ========================== 00:29:30.020 Submission Queue Entry Size 00:29:30.020 Max: 64 00:29:30.020 Min: 64 00:29:30.020 Completion Queue Entry Size 00:29:30.020 Max: 16 00:29:30.020 Min: 16 00:29:30.020 Number of Namespaces: 32 00:29:30.020 Compare Command: Supported 00:29:30.020 Write Uncorrectable Command: Not Supported 00:29:30.020 Dataset Management Command: Supported 00:29:30.020 Write Zeroes Command: Supported 00:29:30.020 Set Features Save Field: Not Supported 00:29:30.020 Reservations: Supported 00:29:30.020 Timestamp: Not Supported 00:29:30.020 Copy: Supported 00:29:30.020 Volatile Write Cache: Present 00:29:30.020 Atomic Write Unit (Normal): 1 00:29:30.020 Atomic Write Unit (PFail): 1 00:29:30.020 Atomic Compare & Write Unit: 1 00:29:30.020 Fused Compare & Write: Supported 00:29:30.020 Scatter-Gather List 00:29:30.020 SGL Command Set: Supported 00:29:30.020 SGL Keyed: Supported 00:29:30.020 SGL Bit Bucket Descriptor: Not Supported 00:29:30.020 SGL Metadata Pointer: Not Supported 00:29:30.020 Oversized SGL: Not Supported 00:29:30.020 SGL Metadata Address: Not Supported 00:29:30.020 SGL Offset: Supported 00:29:30.020 Transport SGL Data Block: Not Supported 00:29:30.020 Replay Protected Memory Block: Not Supported 00:29:30.020 00:29:30.020 Firmware Slot Information 00:29:30.020 ========================= 00:29:30.020 Active slot: 1 00:29:30.020 Slot 1 Firmware Revision: 24.05 00:29:30.020 00:29:30.020 00:29:30.020 Commands Supported and Effects 00:29:30.020 ============================== 00:29:30.020 Admin Commands 00:29:30.020 -------------- 00:29:30.020 Get Log Page (02h): Supported 00:29:30.020 Identify (06h): Supported 00:29:30.020 Abort (08h): Supported 00:29:30.020 Set Features (09h): Supported 00:29:30.020 Get Features (0Ah): Supported 00:29:30.020 Asynchronous Event Request (0Ch): Supported 00:29:30.020 Keep Alive (18h): Supported 00:29:30.020 I/O Commands 00:29:30.020 ------------ 00:29:30.020 Flush (00h): Supported LBA-Change 00:29:30.020 Write (01h): Supported LBA-Change 00:29:30.020 Read (02h): Supported 00:29:30.020 Compare (05h): Supported 00:29:30.020 Write Zeroes (08h): Supported LBA-Change 00:29:30.020 
Dataset Management (09h): Supported LBA-Change 00:29:30.020 Copy (19h): Supported LBA-Change 00:29:30.020 Unknown (79h): Supported LBA-Change 00:29:30.020 Unknown (7Ah): Supported 00:29:30.020 00:29:30.020 Error Log 00:29:30.020 ========= 00:29:30.020 00:29:30.020 Arbitration 00:29:30.020 =========== 00:29:30.020 Arbitration Burst: 1 00:29:30.020 00:29:30.020 Power Management 00:29:30.020 ================ 00:29:30.020 Number of Power States: 1 00:29:30.020 Current Power State: Power State #0 00:29:30.020 Power State #0: 00:29:30.020 Max Power: 0.00 W 00:29:30.020 Non-Operational State: Operational 00:29:30.020 Entry Latency: Not Reported 00:29:30.020 Exit Latency: Not Reported 00:29:30.020 Relative Read Throughput: 0 00:29:30.020 Relative Read Latency: 0 00:29:30.020 Relative Write Throughput: 0 00:29:30.020 Relative Write Latency: 0 00:29:30.020 Idle Power: Not Reported 00:29:30.020 Active Power: Not Reported 00:29:30.020 Non-Operational Permissive Mode: Not Supported 00:29:30.020 00:29:30.020 Health Information 00:29:30.020 ================== 00:29:30.020 Critical Warnings: 00:29:30.020 Available Spare Space: OK 00:29:30.020 Temperature: OK 00:29:30.020 Device Reliability: OK 00:29:30.020 Read Only: No 00:29:30.020 Volatile Memory Backup: OK 00:29:30.020 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:30.020 Temperature Threshold: [2024-05-15 13:47:42.882593] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.020 [2024-05-15 13:47:42.886613] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c12590) 00:29:30.020 [2024-05-15 13:47:42.886636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.020 [2024-05-15 13:47:42.886668] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59cd0, cid 7, qid 0 00:29:30.020 [2024-05-15 13:47:42.886748] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.020 [2024-05-15 13:47:42.886757] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.020 [2024-05-15 13:47:42.886761] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.020 [2024-05-15 13:47:42.886765] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59cd0) on tqpair=0x1c12590 00:29:30.020 [2024-05-15 13:47:42.886813] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:29:30.020 [2024-05-15 13:47:42.886827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.020 [2024-05-15 13:47:42.886835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.020 [2024-05-15 13:47:42.886842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.020 [2024-05-15 13:47:42.886848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.021 [2024-05-15 13:47:42.886858] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.021 [2024-05-15 13:47:42.886863] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.021 [2024-05-15 13:47:42.886867] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1c12590) 00:29:30.021 [2024-05-15 13:47:42.886875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.021 [2024-05-15 13:47:42.886899] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59750, cid 3, qid 0 00:29:30.021 [2024-05-15 13:47:42.886960] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.021 [2024-05-15 13:47:42.886967] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.021 [2024-05-15 13:47:42.886971] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.021 [2024-05-15 13:47:42.886976] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59750) on tqpair=0x1c12590 00:29:30.021 [2024-05-15 13:47:42.886985] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.021 [2024-05-15 13:47:42.886989] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.021 [2024-05-15 13:47:42.886993] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c12590) 00:29:30.021 [2024-05-15 13:47:42.887001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.021 [2024-05-15 13:47:42.887023] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59750, cid 3, qid 0 00:29:30.021 [2024-05-15 13:47:42.887098] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.021 [2024-05-15 13:47:42.887110] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.021 [2024-05-15 13:47:42.887115] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.021 [2024-05-15 13:47:42.887119] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59750) on tqpair=0x1c12590 00:29:30.021 [2024-05-15 13:47:42.887126] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:29:30.021 [2024-05-15 13:47:42.887131] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:29:30.021 [2024-05-15 13:47:42.887143] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.021 [2024-05-15 13:47:42.887147] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.021 [2024-05-15 13:47:42.887151] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c12590) 00:29:30.021 [2024-05-15 13:47:42.887159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.021 [2024-05-15 13:47:42.887178] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59750, cid 3, qid 0 00:29:30.021 [2024-05-15 13:47:42.887238] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.021 [2024-05-15 13:47:42.887245] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.021 [2024-05-15 13:47:42.887249] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.021 [2024-05-15 13:47:42.887254] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59750) on tqpair=0x1c12590 00:29:30.021 [2024-05-15 13:47:42.887266] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.021 [2024-05-15 13:47:42.887271] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.021 
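The shutdown sequence above ("Prepare to destruct SSD", RTD3E = 0 us, shutdown timeout = 10000 ms) is the detach path: the host aborts what is left on the admin queue, writes the shutdown notification through a FABRIC PROPERTY SET, and then keeps issuing FABRIC PROPERTY GETs for CSTS until the controller reports shutdown complete, which is the repeating cid:3 pattern in the rest of this trace. In the public API that whole dance is one detach call; a sketch of the non-blocking variant, assuming the detach_async/poll_async helpers (spdk_nvme_detach() is the blocking equivalent):

#include <errno.h>
#include "spdk/nvme.h"

/* Tear the controller down the way the trace does: request the shutdown and
 * keep polling until CSTS reports it finished or the shutdown timeout hits. */
static void detach_controller(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *ctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &ctx) != 0 || ctx == NULL) {
		return;
	}
	while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
		/* still waiting on the CSTS property reads seen below */
	}
}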
[2024-05-15 13:47:42.887275] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c12590) 00:29:30.021 [2024-05-15 13:47:42.887283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.021 [2024-05-15 13:47:42.887301] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c59750, cid 3, qid 0 00:29:30.021 [2024-05-15 13:47:42.887358] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:30.021 [2024-05-15 13:47:42.887370] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:30.021 [2024-05-15 13:47:42.887385] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:30.021 [2024-05-15 13:47:42.887389] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c59750) on tqpair=0x1c12590 00:29:30.021 [2024-05-15 13:47:42.887401] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:30.021 [2024-05-15 13:47:42.887406] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:30.021 [2024-05-15 13:47:42.887410] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c12590) 00:29:30.021
[... the same FABRIC PROPERTY GET / capsule_cmd / complete tcp_req(0x1c59750) debug cycle repeats, unchanged apart from timestamps, until 13:47:42.894788 while the controller shutdown is polled; the repeated iterations are omitted here ...]
[2024-05-15 13:47:42.894797] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:29:30.023 0 Kelvin (-273 Celsius) 00:29:30.023 Available Spare: 0% 00:29:30.023 Available Spare Threshold: 0% 00:29:30.023 Life Percentage Used: 0% 00:29:30.023 Data Units Read: 0 00:29:30.023 Data Units Written: 0 00:29:30.023 Host Read Commands: 0 00:29:30.023 Host Write Commands: 0 00:29:30.023 Controller Busy Time: 0 minutes 00:29:30.023 Power Cycles: 0 00:29:30.023 Power On Hours: 0 hours 00:29:30.023 Unsafe Shutdowns: 0 00:29:30.023 Unrecoverable Media Errors: 0 00:29:30.023 Lifetime Error Log Entries: 0 00:29:30.023 Warning Temperature Time: 0 
minutes 00:29:30.023 Critical Temperature Time: 0 minutes 00:29:30.023 00:29:30.023 Number of Queues 00:29:30.023 ================ 00:29:30.023 Number of I/O Submission Queues: 127 00:29:30.023 Number of I/O Completion Queues: 127 00:29:30.023 00:29:30.023 Active Namespaces 00:29:30.023 ================= 00:29:30.023 Namespace ID:1 00:29:30.023 Error Recovery Timeout: Unlimited 00:29:30.023 Command Set Identifier: NVM (00h) 00:29:30.023 Deallocate: Supported 00:29:30.024 Deallocated/Unwritten Error: Not Supported 00:29:30.024 Deallocated Read Value: Unknown 00:29:30.024 Deallocate in Write Zeroes: Not Supported 00:29:30.024 Deallocated Guard Field: 0xFFFF 00:29:30.024 Flush: Supported 00:29:30.024 Reservation: Supported 00:29:30.024 Namespace Sharing Capabilities: Multiple Controllers 00:29:30.024 Size (in LBAs): 131072 (0GiB) 00:29:30.024 Capacity (in LBAs): 131072 (0GiB) 00:29:30.024 Utilization (in LBAs): 131072 (0GiB) 00:29:30.024 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:30.024 EUI64: ABCDEF0123456789 00:29:30.024 UUID: d87a2f46-a27d-4e3a-89fc-8d4bfd184c9e 00:29:30.024 Thin Provisioning: Not Supported 00:29:30.024 Per-NS Atomic Units: Yes 00:29:30.024 Atomic Boundary Size (Normal): 0 00:29:30.024 Atomic Boundary Size (PFail): 0 00:29:30.024 Atomic Boundary Offset: 0 00:29:30.024 Maximum Single Source Range Length: 65535 00:29:30.024 Maximum Copy Length: 65535 00:29:30.024 Maximum Source Range Count: 1 00:29:30.024 NGUID/EUI64 Never Reused: No 00:29:30.024 Namespace Write Protected: No 00:29:30.024 Number of LBA Formats: 1 00:29:30.024 Current LBA Format: LBA Format #00 00:29:30.024 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:30.024 00:29:30.024 13:47:42 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:30.024 13:47:42 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:30.024 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.024 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:30.024 13:47:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.024 13:47:42 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:30.024 13:47:42 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:30.024 13:47:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:30.024 13:47:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:29:30.024 13:47:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:30.024 13:47:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:29:30.024 13:47:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:30.024 13:47:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:30.024 rmmod nvme_tcp 00:29:30.024 rmmod nvme_fabrics 00:29:30.024 rmmod nvme_keyring 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 105389 ']' 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 105389 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 105389 ']' 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@950 -- # kill -0 105389 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 105389 00:29:30.024 killing process with pid 105389 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 105389' 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 105389 00:29:30.024 [2024-05-15 13:47:43.038329] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:30.024 13:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 105389 00:29:30.283 13:47:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:30.283 13:47:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:30.283 13:47:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:30.283 13:47:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:30.283 13:47:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:30.283 13:47:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.283 13:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.283 13:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.283 13:47:43 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:30.283 00:29:30.283 real 0m2.602s 00:29:30.283 user 0m7.272s 00:29:30.283 sys 0m0.670s 00:29:30.283 13:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:30.283 13:47:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:30.283 ************************************ 00:29:30.283 END TEST nvmf_identify 00:29:30.283 ************************************ 00:29:30.283 13:47:43 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:30.283 13:47:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:30.283 13:47:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:30.283 13:47:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:30.283 ************************************ 00:29:30.283 START TEST nvmf_perf 00:29:30.283 ************************************ 00:29:30.283 13:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:30.543 * Looking for test storage... 
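The identify test above tears itself down with a short, fixed sequence: the test subsystem is deleted over JSON-RPC, the nvmf_tgt process started by nvmfappstart is killed, and the initiator-side kernel modules are unloaded again. Condensed into a minimal sketch, using the repo path, PID 105389 and subsystem name from this particular run (treat those values as specific to this log):

# delete the subsystem the identify test created
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# stop the target process (the script's killprocess helper also waits for the PID to exit)
kill 105389
# unload the NVMe/TCP initiator modules loaded by nvmftestinit
modprobe -v -r nvme-tcp      # in this run this also pulled out nvme_fabrics and nvme_keyring
modprobe -v -r nvme-fabrics
# drop the test address from the initiator interface
ip -4 addr flush nvmf_init_if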
00:29:30.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:30.543 Cannot find device "nvmf_tgt_br" 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:30.543 Cannot find device "nvmf_tgt_br2" 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:30.543 Cannot find device "nvmf_tgt_br" 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:30.543 Cannot find device "nvmf_tgt_br2" 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:30.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:30.543 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:30.543 
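The nvmf_veth_init trace that runs here first tries to remove any leftovers from a previous run (the "Cannot find device" and "Cannot open network namespace" messages are the expected result on a clean host) and then builds the test topology: a network namespace for the target, veth pairs whose target ends live in that namespace, and a host-side bridge tying the pairs together. Condensed into a rough sketch with the interface names and 10.0.0.0/24 addresses from this run; the link-up steps and the remaining commands are traced below:

# namespace that will host nvmf_tgt
ip netns add nvmf_tgt_ns_spdk
# veth pairs: one for the initiator, two for the target
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target ends into the namespace and assign the test addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bridge the host-side peers together and allow NVMe/TCP traffic on port 4420
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT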
13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:30.543 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:30.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:29:30.842 00:29:30.842 --- 10.0.0.2 ping statistics --- 00:29:30.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.842 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:30.842 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:30.842 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.101 ms 00:29:30.842 00:29:30.842 --- 10.0.0.3 ping statistics --- 00:29:30.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.842 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:30.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:29:30.842 00:29:30.842 --- 10.0.0.1 ping statistics --- 00:29:30.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.842 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=105609 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 105609 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 105609 ']' 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:30.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:30.842 13:47:43 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:30.842 [2024-05-15 13:47:43.844346] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:29:30.842 [2024-05-15 13:47:43.844460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.112 [2024-05-15 13:47:43.965762] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:31.112 [2024-05-15 13:47:43.986120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:31.112 [2024-05-15 13:47:44.094536] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.112 [2024-05-15 13:47:44.094624] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
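With nvmf_tgt running inside the namespace and listening on /var/tmp/spdk.sock, perf.sh assembles its test subsystem over JSON-RPC. The sequence traced below reduces to roughly the following sketch; rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, the pipe for gen_nvme.sh is an abbreviation of the plumbing in perf.sh, and the bdev names, sizes and addresses are the ones used in this run:

# register the local NVMe drive: gen_nvme.sh emits a bdev config that load_subsystem_config reads
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | rpc.py load_subsystem_config
# a 64 MiB Malloc bdev with 512-byte blocks as a second namespace
rpc.py bdev_malloc_create 64 512
# TCP transport, subsystem, namespaces and listeners
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420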
00:29:31.112 [2024-05-15 13:47:44.094641] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.112 [2024-05-15 13:47:44.094651] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.112 [2024-05-15 13:47:44.094661] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.112 [2024-05-15 13:47:44.095504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.112 [2024-05-15 13:47:44.095684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:31.112 [2024-05-15 13:47:44.095843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:31.112 [2024-05-15 13:47:44.095864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.676 13:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:31.676 13:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:29:31.676 13:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:31.676 13:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:31.676 13:47:44 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:31.933 13:47:44 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.933 13:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:31.933 13:47:44 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:29:32.191 13:47:45 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:32.191 13:47:45 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:29:32.448 13:47:45 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:29:32.448 13:47:45 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:32.706 13:47:45 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:32.706 13:47:45 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:29:32.706 13:47:45 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:32.706 13:47:45 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:32.706 13:47:45 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:32.964 [2024-05-15 13:47:46.035072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.221 13:47:46 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:33.479 13:47:46 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:33.479 13:47:46 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:33.738 13:47:46 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:33.738 13:47:46 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:33.738 13:47:46 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.996 [2024-05-15 13:47:47.056060] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:33.996 [2024-05-15 13:47:47.056399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.996 13:47:47 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:34.254 13:47:47 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:34.254 13:47:47 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:29:34.254 13:47:47 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:34.254 13:47:47 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:29:35.626 Initializing NVMe Controllers 00:29:35.626 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:29:35.626 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:29:35.627 Initialization complete. Launching workers. 00:29:35.627 ======================================================== 00:29:35.627 Latency(us) 00:29:35.627 Device Information : IOPS MiB/s Average min max 00:29:35.627 PCIE (0000:00:10.0) NSID 1 from core 0: 23996.11 93.73 1333.72 309.78 7020.19 00:29:35.627 ======================================================== 00:29:35.627 Total : 23996.11 93.73 1333.72 309.78 7020.19 00:29:35.627 00:29:35.627 13:47:48 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:37.001 Initializing NVMe Controllers 00:29:37.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:37.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:37.001 Initialization complete. Launching workers. 00:29:37.001 ======================================================== 00:29:37.001 Latency(us) 00:29:37.001 Device Information : IOPS MiB/s Average min max 00:29:37.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3508.98 13.71 284.67 114.32 4237.08 00:29:37.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.00 0.48 8194.44 7953.91 12040.01 00:29:37.001 ======================================================== 00:29:37.001 Total : 3631.97 14.19 552.54 114.32 12040.01 00:29:37.001 00:29:37.001 13:47:49 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.371 Initializing NVMe Controllers 00:29:38.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:38.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:38.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:38.371 Initialization complete. Launching workers. 
00:29:38.371 ======================================================== 00:29:38.371 Latency(us) 00:29:38.371 Device Information : IOPS MiB/s Average min max 00:29:38.371 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8761.56 34.22 3653.48 676.20 7421.19 00:29:38.372 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2675.34 10.45 12061.50 7175.01 20463.58 00:29:38.372 ======================================================== 00:29:38.372 Total : 11436.90 44.68 5620.30 676.20 20463.58 00:29:38.372 00:29:38.372 13:47:51 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:29:38.372 13:47:51 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:40.900 Initializing NVMe Controllers 00:29:40.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.900 Controller IO queue size 128, less than required. 00:29:40.900 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:40.900 Controller IO queue size 128, less than required. 00:29:40.900 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:40.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:40.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:40.900 Initialization complete. Launching workers. 00:29:40.900 ======================================================== 00:29:40.900 Latency(us) 00:29:40.900 Device Information : IOPS MiB/s Average min max 00:29:40.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1386.91 346.73 94413.69 61241.02 162692.00 00:29:40.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 546.78 136.70 240147.09 93388.87 431096.39 00:29:40.900 ======================================================== 00:29:40.900 Total : 1933.69 483.42 135622.10 61241.02 431096.39 00:29:40.900 00:29:40.900 13:47:53 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:40.900 Initializing NVMe Controllers 00:29:40.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:40.900 Controller IO queue size 128, less than required. 00:29:40.900 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:40.900 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:40.900 Controller IO queue size 128, less than required. 00:29:40.900 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:40.900 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:29:40.900 WARNING: Some requested NVMe devices were skipped 00:29:40.900 No valid NVMe controllers or AIO or URING devices found 00:29:40.900 13:47:53 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:43.426 Initializing NVMe Controllers 00:29:43.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:43.426 Controller IO queue size 128, less than required. 00:29:43.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.426 Controller IO queue size 128, less than required. 00:29:43.426 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:43.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:43.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:43.426 Initialization complete. Launching workers. 00:29:43.426 00:29:43.426 ==================== 00:29:43.426 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:43.426 TCP transport: 00:29:43.426 polls: 9484 00:29:43.426 idle_polls: 4400 00:29:43.426 sock_completions: 5084 00:29:43.426 nvme_completions: 3081 00:29:43.426 submitted_requests: 4572 00:29:43.426 queued_requests: 1 00:29:43.426 00:29:43.426 ==================== 00:29:43.426 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:43.426 TCP transport: 00:29:43.426 polls: 11689 00:29:43.426 idle_polls: 8167 00:29:43.426 sock_completions: 3522 00:29:43.426 nvme_completions: 6673 00:29:43.426 submitted_requests: 10130 00:29:43.426 queued_requests: 1 00:29:43.426 ======================================================== 00:29:43.426 Latency(us) 00:29:43.426 Device Information : IOPS MiB/s Average min max 00:29:43.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 769.87 192.47 174177.84 116081.47 286462.74 00:29:43.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1667.72 416.93 77102.58 38148.39 129819.28 00:29:43.426 ======================================================== 00:29:43.426 Total : 2437.59 609.40 107762.12 38148.39 286462.74 00:29:43.426 00:29:43.426 13:47:56 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:43.684 13:47:56 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:43.941 13:47:56 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:29:43.942 13:47:56 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:29:43.942 13:47:56 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:29:44.199 13:47:57 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=9f4974e6-e9d8-40a6-b5d4-a5286a16edbf 00:29:44.199 13:47:57 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 9f4974e6-e9d8-40a6-b5d4-a5286a16edbf 00:29:44.199 13:47:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=9f4974e6-e9d8-40a6-b5d4-a5286a16edbf 00:29:44.199 13:47:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:44.199 13:47:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 
00:29:44.199 13:47:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:29:44.199 13:47:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:44.456 13:47:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:44.456 { 00:29:44.456 "base_bdev": "Nvme0n1", 00:29:44.456 "block_size": 4096, 00:29:44.456 "cluster_size": 4194304, 00:29:44.456 "free_clusters": 1278, 00:29:44.456 "name": "lvs_0", 00:29:44.456 "total_data_clusters": 1278, 00:29:44.456 "uuid": "9f4974e6-e9d8-40a6-b5d4-a5286a16edbf" 00:29:44.456 } 00:29:44.457 ]' 00:29:44.457 13:47:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="9f4974e6-e9d8-40a6-b5d4-a5286a16edbf") .free_clusters' 00:29:44.457 13:47:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1278 00:29:44.457 13:47:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="9f4974e6-e9d8-40a6-b5d4-a5286a16edbf") .cluster_size' 00:29:44.457 5112 00:29:44.457 13:47:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:44.457 13:47:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5112 00:29:44.457 13:47:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5112 00:29:44.457 13:47:57 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:29:44.457 13:47:57 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9f4974e6-e9d8-40a6-b5d4-a5286a16edbf lbd_0 5112 00:29:44.714 13:47:57 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=35c8eaf7-45eb-4963-b4f5-98cf38121023 00:29:44.714 13:47:57 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 35c8eaf7-45eb-4963-b4f5-98cf38121023 lvs_n_0 00:29:44.972 13:47:58 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=a289e83a-5151-4c66-959b-3ab5b14d96d6 00:29:44.972 13:47:58 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb a289e83a-5151-4c66-959b-3ab5b14d96d6 00:29:44.972 13:47:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=a289e83a-5151-4c66-959b-3ab5b14d96d6 00:29:44.972 13:47:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:29:44.972 13:47:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:29:44.972 13:47:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:29:44.972 13:47:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:45.538 13:47:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:29:45.538 { 00:29:45.538 "base_bdev": "Nvme0n1", 00:29:45.538 "block_size": 4096, 00:29:45.538 "cluster_size": 4194304, 00:29:45.538 "free_clusters": 0, 00:29:45.538 "name": "lvs_0", 00:29:45.538 "total_data_clusters": 1278, 00:29:45.538 "uuid": "9f4974e6-e9d8-40a6-b5d4-a5286a16edbf" 00:29:45.538 }, 00:29:45.538 { 00:29:45.538 "base_bdev": "35c8eaf7-45eb-4963-b4f5-98cf38121023", 00:29:45.538 "block_size": 4096, 00:29:45.538 "cluster_size": 4194304, 00:29:45.538 "free_clusters": 1276, 00:29:45.538 "name": "lvs_n_0", 00:29:45.538 "total_data_clusters": 1276, 00:29:45.538 "uuid": "a289e83a-5151-4c66-959b-3ab5b14d96d6" 00:29:45.538 } 00:29:45.538 ]' 00:29:45.538 13:47:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | 
select(.uuid=="a289e83a-5151-4c66-959b-3ab5b14d96d6") .free_clusters' 00:29:45.538 13:47:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1276 00:29:45.538 13:47:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="a289e83a-5151-4c66-959b-3ab5b14d96d6") .cluster_size' 00:29:45.538 5104 00:29:45.538 13:47:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:29:45.538 13:47:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5104 00:29:45.538 13:47:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5104 00:29:45.539 13:47:58 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:29:45.539 13:47:58 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a289e83a-5151-4c66-959b-3ab5b14d96d6 lbd_nest_0 5104 00:29:45.797 13:47:58 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=55a0a17b-5ab3-436e-8504-51c1e91c1fb5 00:29:45.797 13:47:58 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:46.055 13:47:58 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:29:46.055 13:47:58 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 55a0a17b-5ab3-436e-8504-51c1e91c1fb5 00:29:46.314 13:47:59 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.573 13:47:59 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:29:46.573 13:47:59 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:29:46.573 13:47:59 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:46.573 13:47:59 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:46.573 13:47:59 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:46.831 Initializing NVMe Controllers 00:29:46.831 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:46.831 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:29:46.831 WARNING: Some requested NVMe devices were skipped 00:29:46.831 No valid NVMe controllers or AIO or URING devices found 00:29:46.831 13:47:59 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:46.831 13:47:59 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:59.031 Initializing NVMe Controllers 00:29:59.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:59.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:59.031 Initialization complete. Launching workers. 
00:29:59.031 ======================================================== 00:29:59.031 Latency(us) 00:29:59.031 Device Information : IOPS MiB/s Average min max 00:29:59.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 938.10 117.26 1065.65 360.65 7612.22 00:29:59.031 ======================================================== 00:29:59.031 Total : 938.10 117.26 1065.65 360.65 7612.22 00:29:59.031 00:29:59.031 13:48:10 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:59.031 13:48:10 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:59.031 13:48:10 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:59.031 Initializing NVMe Controllers 00:29:59.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:59.031 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:29:59.031 WARNING: Some requested NVMe devices were skipped 00:29:59.031 No valid NVMe controllers or AIO or URING devices found 00:29:59.031 13:48:10 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:59.031 13:48:10 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:09.060 Initializing NVMe Controllers 00:30:09.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:09.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:09.060 Initialization complete. Launching workers. 
00:30:09.060 ======================================================== 00:30:09.060 Latency(us) 00:30:09.060 Device Information : IOPS MiB/s Average min max 00:30:09.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1084.76 135.60 29512.36 7858.94 264725.93 00:30:09.060 ======================================================== 00:30:09.060 Total : 1084.76 135.60 29512.36 7858.94 264725.93 00:30:09.060 00:30:09.060 13:48:20 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:09.060 13:48:20 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:09.060 13:48:20 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:09.060 Initializing NVMe Controllers 00:30:09.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:09.060 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:30:09.060 WARNING: Some requested NVMe devices were skipped 00:30:09.060 No valid NVMe controllers or AIO or URING devices found 00:30:09.060 13:48:20 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:09.060 13:48:20 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:19.028 Initializing NVMe Controllers 00:30:19.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:19.028 Controller IO queue size 128, less than required. 00:30:19.028 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:19.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:19.028 Initialization complete. Launching workers. 
00:30:19.028 ======================================================== 00:30:19.028 Latency(us) 00:30:19.028 Device Information : IOPS MiB/s Average min max 00:30:19.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3966.93 495.87 32311.38 11363.82 74659.45 00:30:19.028 ======================================================== 00:30:19.028 Total : 3966.93 495.87 32311.38 11363.82 74659.45 00:30:19.028 00:30:19.028 13:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:19.028 13:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 55a0a17b-5ab3-436e-8504-51c1e91c1fb5 00:30:19.028 13:48:31 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:19.285 13:48:32 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 35c8eaf7-45eb-4963-b4f5-98cf38121023 00:30:19.543 13:48:32 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:19.802 rmmod nvme_tcp 00:30:19.802 rmmod nvme_fabrics 00:30:19.802 rmmod nvme_keyring 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 105609 ']' 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 105609 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 105609 ']' 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 105609 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 105609 00:30:19.802 killing process with pid 105609 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 105609' 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 105609 00:30:19.802 [2024-05-15 13:48:32.758283] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 
hit 1 times 00:30:19.802 13:48:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 105609 00:30:21.703 13:48:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:21.703 13:48:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:21.703 13:48:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:21.703 13:48:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:21.703 13:48:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:21.703 13:48:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.703 13:48:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:21.703 13:48:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.703 13:48:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:21.703 00:30:21.703 real 0m51.033s 00:30:21.703 user 3m12.465s 00:30:21.703 sys 0m10.971s 00:30:21.703 13:48:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:21.703 13:48:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:21.703 ************************************ 00:30:21.703 END TEST nvmf_perf 00:30:21.703 ************************************ 00:30:21.703 13:48:34 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:21.703 13:48:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:21.703 13:48:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:21.703 13:48:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:21.703 ************************************ 00:30:21.703 START TEST nvmf_fio_host 00:30:21.703 ************************************ 00:30:21.703 13:48:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:21.703 * Looking for test storage... 
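
The nvmf_perf run that just finished sizes its logical volumes from live lvstore metadata instead of hard-coding capacities: free_clusters and cluster_size are read back with bdev_lvol_get_lvstores and jq, multiplied into a MiB figure (1278 clusters x 4 MiB = 5112 MiB for lvs_0, 1276 x 4 MiB = 5104 MiB for the nested lvs_n_0 above), and that figure is handed straight to bdev_lvol_create before the lvol is exposed over NVMe/TCP. Below is a trimmed, hedged sketch of that flow using only RPC calls and jq filters visible in the trace; it assumes rpc.py talks to the default /var/tmp/spdk.sock, skips the nested-lvstore step and all cleanup, and is illustration only, not the test script itself.

```bash
#!/usr/bin/env bash
# Sketch: size an lvol from live lvstore metadata, mirroring the trace above.
# Assumes an SPDK target is already running with Nvme0n1 attached.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Create an lvstore on the NVMe bdev; rpc.py prints the new lvstore UUID.
ls_guid=$("$rpc" bdev_lvol_create_lvstore Nvme0n1 lvs_0)

# Read back free_clusters and cluster_size for that UUID.
lvs_info=$("$rpc" bdev_lvol_get_lvstores)
fc=$(jq ".[] | select(.uuid==\"$ls_guid\") .free_clusters" <<< "$lvs_info")
cs=$(jq ".[] | select(.uuid==\"$ls_guid\") .cluster_size" <<< "$lvs_info")

# Clusters -> MiB: e.g. 1278 clusters * 4194304 B = 5112 MiB in the run above.
free_mb=$((fc * cs / 1024 / 1024))

# Carve one lvol over the free space and expose it over NVMe/TCP on 10.0.0.2:4420.
lb_guid=$("$rpc" bdev_lvol_create -u "$ls_guid" lbd_0 "$free_mb")
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$lb_guid"
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Deriving the size this way is what lets the same perf.sh run unmodified on differently sized devices; only the cluster math above changes from machine to machine.
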
00:30:21.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:21.703 13:48:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:21.704 13:48:34 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:21.704 Cannot find device "nvmf_tgt_br" 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:21.704 Cannot find device "nvmf_tgt_br2" 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:21.704 Cannot find device "nvmf_tgt_br" 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:21.704 Cannot find device "nvmf_tgt_br2" 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:30:21.704 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:21.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:21.705 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:21.705 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:21.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:21.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:30:21.963 00:30:21.963 --- 10.0.0.2 ping statistics --- 00:30:21.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.963 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:21.963 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:21.963 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:30:21.963 00:30:21.963 --- 10.0.0.3 ping statistics --- 00:30:21.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.963 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:21.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:21.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:30:21.963 00:30:21.963 --- 10.0.0.1 ping statistics --- 00:30:21.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.963 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=106557 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 106557 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 106557 ']' 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:21.963 13:48:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:21.963 [2024-05-15 13:48:34.939830] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:30:21.963 [2024-05-15 13:48:34.939946] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.222 [2024-05-15 13:48:35.063850] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
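
The host tests in this log reach a target that lives in its own network namespace: nvmf_veth_init creates nvmf_tgt_ns_spdk, builds veth pairs, moves the target-side ends into the namespace, assigns 10.0.0.1/24 to the initiator and 10.0.0.2/24 and 10.0.0.3/24 to the target, bridges the host-side ends over nvmf_br, and opens TCP port 4420 in iptables before the pings above verify reachability. The sketch below re-creates that topology for the first target interface only, using the interface and namespace names from the trace; it is a simplified illustration (run as root) and omits the second target interface and the error handling the real common.sh carries.

```bash
#!/usr/bin/env bash
# Sketch of the veth/namespace topology used above (first target interface only).
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# Initiator-side and target-side veth pairs; the *_br ends stay in the host.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 for the initiator, 10.0.0.2 for the in-namespace target.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side veth ends so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP traffic to port 4420 and confirm reachability.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2
```

With this in place, nvmf_tgt is launched via `ip netns exec nvmf_tgt_ns_spdk`, which is why the startup banner that follows runs inside the namespace while fio and spdk_nvme_perf connect from the host side at 10.0.0.2:4420.
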
00:30:22.222 [2024-05-15 13:48:35.082984] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:22.222 [2024-05-15 13:48:35.177123] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:22.222 [2024-05-15 13:48:35.177175] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:22.222 [2024-05-15 13:48:35.177187] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:22.222 [2024-05-15 13:48:35.177195] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:22.222 [2024-05-15 13:48:35.177203] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:22.222 [2024-05-15 13:48:35.177316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.222 [2024-05-15 13:48:35.177447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:22.222 [2024-05-15 13:48:35.178103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:22.222 [2024-05-15 13:48:35.178155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.187 13:48:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:23.187 13:48:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:30:23.187 13:48:35 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:23.187 13:48:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.187 13:48:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.187 [2024-05-15 13:48:35.988521] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.187 Malloc1 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.187 [2024-05-15 13:48:36.096987] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:23.187 [2024-05-15 13:48:36.097258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:23.187 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:23.188 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:23.188 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:23.188 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:23.188 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:23.188 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:23.188 13:48:36 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:23.188 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:23.188 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:30:23.188 13:48:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:23.188 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:23.188 fio-3.35 00:30:23.188 Starting 1 thread 00:30:25.717 00:30:25.717 test: (groupid=0, jobs=1): err= 0: pid=106636: Wed May 15 13:48:38 2024 00:30:25.717 read: IOPS=8471, BW=33.1MiB/s (34.7MB/s)(66.4MiB/2007msec) 00:30:25.717 slat (usec): min=2, max=218, avg= 2.50, stdev= 2.10 00:30:25.717 clat (usec): min=2422, max=16959, avg=7886.25, stdev=1097.15 00:30:25.717 lat (usec): min=2449, max=16961, avg=7888.75, stdev=1097.09 00:30:25.717 clat percentiles (usec): 00:30:25.717 | 1.00th=[ 6456], 5.00th=[ 6783], 10.00th=[ 6915], 20.00th=[ 7177], 00:30:25.717 | 30.00th=[ 7308], 40.00th=[ 7439], 50.00th=[ 7570], 60.00th=[ 7767], 00:30:25.717 | 70.00th=[ 7963], 80.00th=[ 8455], 90.00th=[ 9372], 95.00th=[10028], 00:30:25.717 | 99.00th=[11469], 99.50th=[12518], 99.90th=[15795], 99.95th=[16712], 00:30:25.717 | 99.99th=[16909] 00:30:25.717 bw ( KiB/s): min=28280, max=35992, per=99.96%, avg=33874.00, stdev=3742.09, samples=4 00:30:25.717 iops : min= 7070, max= 8998, avg=8468.50, stdev=935.52, samples=4 00:30:25.717 write: IOPS=8470, BW=33.1MiB/s (34.7MB/s)(66.4MiB/2007msec); 0 zone resets 00:30:25.717 slat (usec): min=2, max=157, avg= 2.60, stdev= 1.39 00:30:25.717 clat (usec): min=1429, max=15773, avg=7158.95, stdev=1014.26 00:30:25.717 lat (usec): min=1437, max=15775, avg=7161.55, stdev=1014.23 00:30:25.717 clat percentiles (usec): 00:30:25.717 | 1.00th=[ 5800], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6521], 00:30:25.717 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046], 00:30:25.717 | 70.00th=[ 7242], 80.00th=[ 7635], 90.00th=[ 8455], 95.00th=[ 9110], 00:30:25.717 | 99.00th=[10552], 99.50th=[11469], 99.90th=[14222], 99.95th=[15270], 00:30:25.717 | 99.99th=[15795] 00:30:25.717 bw ( KiB/s): min=29272, max=35456, per=99.99%, avg=33878.00, stdev=3070.93, samples=4 00:30:25.717 iops : min= 7318, max= 8864, avg=8469.50, stdev=767.73, samples=4 00:30:25.717 lat (msec) : 2=0.03%, 4=0.13%, 10=96.40%, 20=3.44% 00:30:25.717 cpu : usr=68.39%, sys=23.43%, ctx=8, majf=0, minf=5 00:30:25.717 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:25.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:25.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:25.717 issued rwts: total=17003,17000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:25.717 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:25.717 00:30:25.717 Run status group 0 (all jobs): 00:30:25.717 READ: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=66.4MiB (69.6MB), run=2007-2007msec 00:30:25.717 WRITE: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=66.4MiB (69.6MB), run=2007-2007msec 00:30:25.717 13:48:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio 
'--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:25.717 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:25.717 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:30:25.718 13:48:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:25.718 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:25.718 fio-3.35 00:30:25.718 Starting 1 thread 00:30:28.250 00:30:28.250 test: (groupid=0, jobs=1): err= 0: pid=106679: Wed May 15 13:48:41 2024 00:30:28.250 read: IOPS=7697, BW=120MiB/s (126MB/s)(241MiB/2005msec) 00:30:28.250 slat (usec): min=3, max=120, avg= 4.14, stdev= 1.92 00:30:28.250 clat (usec): min=2983, max=21456, avg=10015.56, stdev=2792.98 00:30:28.250 lat (usec): min=2987, max=21463, avg=10019.71, stdev=2793.25 00:30:28.250 clat percentiles (usec): 00:30:28.250 | 1.00th=[ 4948], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 7504], 00:30:28.250 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10552], 00:30:28.250 | 70.00th=[11338], 80.00th=[12125], 90.00th=[13566], 95.00th=[15008], 
00:30:28.250 | 99.00th=[17957], 99.50th=[18744], 99.90th=[21103], 99.95th=[21103], 00:30:28.250 | 99.99th=[21365] 00:30:28.250 bw ( KiB/s): min=56864, max=65952, per=49.92%, avg=61480.00, stdev=3713.87, samples=4 00:30:28.250 iops : min= 3554, max= 4122, avg=3842.50, stdev=232.12, samples=4 00:30:28.250 write: IOPS=4430, BW=69.2MiB/s (72.6MB/s)(126MiB/1816msec); 0 zone resets 00:30:28.250 slat (usec): min=36, max=235, avg=39.69, stdev= 6.67 00:30:28.250 clat (usec): min=3089, max=22537, avg=11880.33, stdev=2351.33 00:30:28.250 lat (usec): min=3126, max=22589, avg=11920.02, stdev=2352.94 00:30:28.250 clat percentiles (usec): 00:30:28.250 | 1.00th=[ 7898], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10028], 00:30:28.250 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:30:28.250 | 70.00th=[12649], 80.00th=[13698], 90.00th=[14877], 95.00th=[16188], 00:30:28.250 | 99.00th=[19268], 99.50th=[20317], 99.90th=[21890], 99.95th=[22152], 00:30:28.250 | 99.99th=[22414] 00:30:28.250 bw ( KiB/s): min=59392, max=68992, per=90.42%, avg=64096.00, stdev=3922.14, samples=4 00:30:28.250 iops : min= 3712, max= 4312, avg=4006.00, stdev=245.13, samples=4 00:30:28.250 lat (msec) : 4=0.16%, 10=41.57%, 20=57.86%, 50=0.41% 00:30:28.250 cpu : usr=69.93%, sys=19.00%, ctx=6, majf=0, minf=1 00:30:28.250 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:30:28.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:28.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:28.250 issued rwts: total=15434,8046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:28.250 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:28.250 00:30:28.250 Run status group 0 (all jobs): 00:30:28.250 READ: bw=120MiB/s (126MB/s), 120MiB/s-120MiB/s (126MB/s-126MB/s), io=241MiB (253MB), run=2005-2005msec 00:30:28.250 WRITE: bw=69.2MiB/s (72.6MB/s), 69.2MiB/s-69.2MiB/s (72.6MB/s-72.6MB/s), io=126MiB (132MB), run=1816-1816msec 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # get_nvme_bdfs 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@50 -- 
# rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.250 Nvme0n1 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # ls_guid=62c71b13-34f1-45d9-ad07-add98d39f2be 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # get_lvs_free_mb 62c71b13-34f1-45d9-ad07-add98d39f2be 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=62c71b13-34f1-45d9-ad07-add98d39f2be 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:28.250 { 00:30:28.250 "base_bdev": "Nvme0n1", 00:30:28.250 "block_size": 4096, 00:30:28.250 "cluster_size": 1073741824, 00:30:28.250 "free_clusters": 4, 00:30:28.250 "name": "lvs_0", 00:30:28.250 "total_data_clusters": 4, 00:30:28.250 "uuid": "62c71b13-34f1-45d9-ad07-add98d39f2be" 00:30:28.250 } 00:30:28.250 ]' 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="62c71b13-34f1-45d9-ad07-add98d39f2be") .free_clusters' 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=4 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="62c71b13-34f1-45d9-ad07-add98d39f2be") .cluster_size' 00:30:28.250 4096 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4096 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4096 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 4096 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.250 7d95ea96-84ab-4b70-bf18-813b819a9247 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.250 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:30:28.509 13:48:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:28.509 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:28.509 fio-3.35 00:30:28.509 Starting 1 thread 00:30:31.041 00:30:31.041 test: (groupid=0, jobs=1): err= 0: pid=106758: Wed May 15 13:48:43 2024 00:30:31.041 read: IOPS=6311, BW=24.7MiB/s (25.9MB/s)(50.6MiB/2051msec) 00:30:31.041 slat (usec): min=2, max=364, avg= 2.87, stdev= 4.11 00:30:31.041 clat (usec): min=4009, max=59541, avg=10633.44, stdev=3268.66 00:30:31.041 lat (usec): min=4019, max=59543, avg=10636.31, stdev=3268.59 00:30:31.041 clat percentiles (usec): 00:30:31.041 | 1.00th=[ 8455], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:30:31.041 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:30:31.041 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11863], 00:30:31.041 | 99.00th=[12649], 99.50th=[50070], 99.90th=[58459], 99.95th=[59507], 00:30:31.041 | 99.99th=[59507] 00:30:31.041 bw ( KiB/s): min=24792, max=26416, per=100.00%, avg=25746.75, stdev=711.07, samples=4 00:30:31.041 iops : min= 6198, max= 6604, avg=6436.50, stdev=177.63, samples=4 00:30:31.041 write: IOPS=6318, BW=24.7MiB/s (25.9MB/s)(50.6MiB/2051msec); 0 zone resets 00:30:31.041 slat (usec): min=2, max=259, avg= 3.00, stdev= 2.59 00:30:31.041 clat (usec): min=2476, max=59153, avg=9537.85, stdev=3191.09 00:30:31.041 lat (usec): min=2490, max=59156, avg=9540.86, stdev=3191.07 00:30:31.041 clat percentiles (usec): 00:30:31.041 | 1.00th=[ 7570], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8717], 00:30:31.041 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:30:31.041 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10290], 95.00th=[10552], 00:30:31.041 | 99.00th=[11076], 99.50th=[11731], 99.90th=[56886], 99.95th=[58459], 00:30:31.041 | 99.99th=[58983] 00:30:31.041 bw ( KiB/s): min=25520, max=25992, per=100.00%, avg=25783.00, stdev=217.68, samples=4 00:30:31.041 iops : min= 6380, max= 6498, avg=6445.75, stdev=54.42, samples=4 00:30:31.041 lat (msec) : 4=0.03%, 10=56.43%, 20=43.04%, 100=0.49% 00:30:31.041 cpu : usr=69.56%, sys=23.12%, ctx=16, majf=0, minf=5 00:30:31.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:31.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:31.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:31.041 issued rwts: total=12945,12960,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:31.041 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:31.041 00:30:31.041 Run status group 0 (all jobs): 00:30:31.041 READ: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=50.6MiB (53.0MB), run=2051-2051msec 00:30:31.041 WRITE: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=50.6MiB (53.1MB), run=2051-2051msec 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:31.041 13:48:43 
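For reference, the fio job above runs against an lvol carved out of the local NVMe device and exported over NVMe/TCP. A minimal sketch of the RPC sequence host/fio.sh drives for this step, with NQNs, addresses and sizes taken from the log (assumes the commands are issued from the spdk repo root against the already-running nvmf_tgt, and that the fio plugin was built at build/fio/spdk_nvme):

  # attach the local PCIe NVMe device and build an lvol on top of it
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0    # 1 GiB clusters
  scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096                    # 4096 MiB lvol

  # export the lvol through a second subsystem listening on TCP
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420

  # drive I/O through the SPDK NVMe fio plugin, as fio_nvme does above
  LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096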
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # ls_nested_guid=f8671d12-ed53-4848-a7a1-723c388a4ad1 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@63 -- # get_lvs_free_mb f8671d12-ed53-4848-a7a1-723c388a4ad1 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=f8671d12-ed53-4848-a7a1-723c388a4ad1 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:30:31.041 { 00:30:31.041 "base_bdev": "Nvme0n1", 00:30:31.041 "block_size": 4096, 00:30:31.041 "cluster_size": 1073741824, 00:30:31.041 "free_clusters": 0, 00:30:31.041 "name": "lvs_0", 00:30:31.041 "total_data_clusters": 4, 00:30:31.041 "uuid": "62c71b13-34f1-45d9-ad07-add98d39f2be" 00:30:31.041 }, 00:30:31.041 { 00:30:31.041 "base_bdev": "7d95ea96-84ab-4b70-bf18-813b819a9247", 00:30:31.041 "block_size": 4096, 00:30:31.041 "cluster_size": 4194304, 00:30:31.041 "free_clusters": 1022, 00:30:31.041 "name": "lvs_n_0", 00:30:31.041 "total_data_clusters": 1022, 00:30:31.041 "uuid": "f8671d12-ed53-4848-a7a1-723c388a4ad1" 00:30:31.041 } 00:30:31.041 ]' 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="f8671d12-ed53-4848-a7a1-723c388a4ad1") .free_clusters' 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=1022 00:30:31.041 13:48:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="f8671d12-ed53-4848-a7a1-723c388a4ad1") .cluster_size' 00:30:31.041 4088 00:30:31.041 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:30:31.041 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4088 00:30:31.041 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4088 00:30:31.041 13:48:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:30:31.041 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.041 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.041 
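get_lvs_free_mb above derives the size for the nested lvol from the lvstore's free cluster count and cluster size; with the values printed in the log (1022 free clusters of 4 MiB each) that comes to 4088 MiB. A hedged one-liner reproducing the calculation against bdev_lvol_get_lvstores output:

  uuid=f8671d12-ed53-4848-a7a1-723c388a4ad1    # lvs_n_0, as reported above
  lvs=$(scripts/rpc.py bdev_lvol_get_lvstores)
  fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<< "$lvs")
  cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size"  <<< "$lvs")
  echo $(( fc * cs / 1024 / 1024 ))            # 1022 * 4194304 bytes -> 4088 MiB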
099a6f94-aef9-404c-ba66-96905fc6b4d1 00:30:31.041 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.041 13:48:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:31.041 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.041 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.041 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.041 13:48:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:31.041 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:30:31.042 13:48:44 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:30:31.042 13:48:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:31.300 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:31.300 fio-3.35 00:30:31.300 Starting 1 thread 00:30:33.829 00:30:33.829 test: (groupid=0, jobs=1): err= 0: pid=106813: Wed May 15 13:48:46 2024 00:30:33.829 read: IOPS=5718, BW=22.3MiB/s (23.4MB/s)(44.9MiB/2010msec) 00:30:33.829 slat (usec): min=2, max=316, avg= 2.55, stdev= 3.68 00:30:33.829 clat (usec): min=4407, max=19993, avg=11737.86, stdev=999.36 00:30:33.829 lat (usec): min=4416, max=19995, avg=11740.41, stdev=999.03 00:30:33.829 clat percentiles (usec): 00:30:33.829 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10683], 20.00th=[10945], 00:30:33.829 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:30:33.829 | 70.00th=[12125], 80.00th=[12518], 90.00th=[12911], 95.00th=[13304], 00:30:33.829 | 99.00th=[14091], 99.50th=[14484], 99.90th=[18482], 99.95th=[19006], 00:30:33.829 | 99.99th=[20055] 00:30:33.829 bw ( KiB/s): min=21968, max=23416, per=100.00%, avg=22878.00, stdev=636.54, samples=4 00:30:33.829 iops : min= 5492, max= 5854, avg=5719.50, stdev=159.13, samples=4 00:30:33.829 write: IOPS=5707, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2010msec); 0 zone resets 00:30:33.829 slat (usec): min=2, max=192, avg= 2.64, stdev= 2.08 00:30:33.829 clat (usec): min=2193, max=21221, avg=10557.90, stdev=959.35 00:30:33.829 lat (usec): min=2207, max=21224, avg=10560.54, stdev=959.19 00:30:33.829 clat percentiles (usec): 00:30:33.829 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:30:33.829 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:30:33.829 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:30:33.829 | 99.00th=[12649], 99.50th=[13042], 99.90th=[18744], 99.95th=[20055], 00:30:33.829 | 99.99th=[21103] 00:30:33.829 bw ( KiB/s): min=22656, max=22960, per=99.87%, avg=22802.00, stdev=160.32, samples=4 00:30:33.829 iops : min= 5664, max= 5740, avg=5700.50, stdev=40.08, samples=4 00:30:33.829 lat (msec) : 4=0.04%, 10=13.74%, 20=86.20%, 50=0.02% 00:30:33.829 cpu : usr=71.28%, sys=23.20%, ctx=3, majf=0, minf=5 00:30:33.829 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:33.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:33.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:33.829 issued rwts: total=11494,11473,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:33.829 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:33.829 00:30:33.829 Run status group 0 (all jobs): 00:30:33.830 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.9MiB (47.1MB), run=2010-2010msec 00:30:33.830 WRITE: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s 
(23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2010-2010msec 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # sync 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.830 13:48:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.396 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.396 13:48:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:34.397 rmmod nvme_tcp 00:30:34.397 rmmod nvme_fabrics 00:30:34.397 rmmod nvme_keyring 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@124 -- # set -e 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 106557 ']' 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 106557 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 106557 ']' 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 106557 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 106557 00:30:34.397 killing process with pid 106557 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 106557' 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 106557 00:30:34.397 [2024-05-15 13:48:47.351007] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:34.397 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 106557 00:30:34.655 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:34.655 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:34.655 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:34.655 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:34.655 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:34.655 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.655 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:34.655 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.655 13:48:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:34.655 00:30:34.655 real 0m13.173s 00:30:34.655 user 0m55.143s 00:30:34.655 sys 0m3.568s 00:30:34.655 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:34.655 ************************************ 00:30:34.655 END TEST nvmf_fio_host 00:30:34.655 ************************************ 00:30:34.655 13:48:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:34.655 13:48:47 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:34.655 13:48:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:30:34.655 13:48:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:34.655 13:48:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:34.655 ************************************ 00:30:34.655 START TEST nvmf_failover 00:30:34.655 ************************************ 00:30:34.655 13:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:34.655 * Looking for test storage... 00:30:34.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:34.655 13:48:47 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:34.655 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:34.655 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:34.655 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:34.655 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:34.655 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:34.655 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:34.655 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:34.655 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:34.655 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:34.655 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:34.914 13:48:47 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:34.915 
13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:34.915 Cannot find device "nvmf_tgt_br" 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:34.915 Cannot find device "nvmf_tgt_br2" 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:34.915 Cannot find device "nvmf_tgt_br" 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:34.915 Cannot find device "nvmf_tgt_br2" 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
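nvmftestinit first tears down any stale test network (the "Cannot find device" / "Cannot open network namespace" errors around this point are expected on a fresh host) and then rebuilds it. Condensed from the nvmf_veth_init commands in the entries that follow, the topology looks roughly like this (interface names and 10.0.0.x addresses as in the log; the second target interface and the link-up steps are omitted):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, 10.0.0.2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT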
00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:34.915 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:34.915 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:34.915 13:48:47 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:34.915 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:34.915 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:35.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:35.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:30:35.174 00:30:35.174 --- 10.0.0.2 ping statistics --- 00:30:35.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.174 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:35.174 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:35.174 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:30:35.174 00:30:35.174 --- 10.0.0.3 ping statistics --- 00:30:35.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.174 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:35.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:35.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:30:35.174 00:30:35.174 --- 10.0.0.1 ping statistics --- 00:30:35.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:35.174 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:35.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=107027 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 107027 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 107027 ']' 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
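With connectivity across the bridge verified by the pings above, nvmfappstart launches the target inside the namespace and waits for its RPC socket. A rough equivalent of that step, using the binary path, shm id, tracepoint mask and core mask printed in the log (the harness relies on its waitforlisten helper; a plain poll on rpc_get_methods is shown here instead):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # wait until /var/tmp/spdk.sock answers RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done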
00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:35.174 13:48:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:35.174 [2024-05-15 13:48:48.170226] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:30:35.174 [2024-05-15 13:48:48.170341] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:35.433 [2024-05-15 13:48:48.296415] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:35.433 [2024-05-15 13:48:48.318000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:35.433 [2024-05-15 13:48:48.426804] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.433 [2024-05-15 13:48:48.427134] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:35.433 [2024-05-15 13:48:48.427380] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.433 [2024-05-15 13:48:48.427615] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.433 [2024-05-15 13:48:48.427738] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.433 [2024-05-15 13:48:48.427980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.433 [2024-05-15 13:48:48.428159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.433 [2024-05-15 13:48:48.428165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:36.368 13:48:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:36.368 13:48:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:36.368 13:48:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:36.368 13:48:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:36.368 13:48:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:36.368 13:48:49 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:36.368 13:48:49 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:36.627 [2024-05-15 13:48:49.537094] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:36.627 13:48:49 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:36.886 Malloc0 00:30:36.886 13:48:49 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:37.143 13:48:50 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:37.401 13:48:50 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:37.659 [2024-05-15 13:48:50.629746] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:37.659 [2024-05-15 13:48:50.630019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:37.659 13:48:50 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:37.917 [2024-05-15 13:48:50.910202] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:37.917 13:48:50 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:38.175 [2024-05-15 13:48:51.226477] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:38.175 13:48:51 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=107144 00:30:38.175 13:48:51 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:38.175 13:48:51 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:38.175 13:48:51 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 107144 /var/tmp/bdevperf.sock 00:30:38.175 13:48:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 107144 ']' 00:30:38.175 13:48:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:38.175 13:48:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:38.175 13:48:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:38.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
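The failover test needs one subsystem reachable on several TCP ports plus an initiator that can be re-pointed between them while I/O runs. A condensed sketch of the scaffolding built here, with ports, NQN and bdev names taken from the log (the listener setup appears in the entries above; the bdevperf attach calls and perform_tests follow below; the loop over ports is shorthand for the three separate add_listener calls):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target side: Malloc0-backed subsystem listening on 4420/4421/4422
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done

  # initiator side: bdevperf attaches the first two paths as NVMe0
  # (wait for /var/tmp/bdevperf.sock before issuing RPCs, as waitforlisten does above)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

Later in the log the 4420 listener is removed (host/failover.sh@43), which is what drives the path failover exercised by this test.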
00:30:38.175 13:48:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:38.175 13:48:51 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:39.551 13:48:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:39.551 13:48:52 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:39.551 13:48:52 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:39.551 NVMe0n1 00:30:39.551 13:48:52 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:40.116 00:30:40.116 13:48:52 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=107193 00:30:40.116 13:48:52 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:40.116 13:48:52 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:41.050 13:48:53 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:41.311 [2024-05-15 13:48:54.256878] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.311 [2024-05-15 13:48:54.256945] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.311 [2024-05-15 13:48:54.256958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.311 [2024-05-15 13:48:54.256967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.311 [2024-05-15 13:48:54.256976] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.311 [2024-05-15 13:48:54.256985] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.311 [2024-05-15 13:48:54.256993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.311 [2024-05-15 13:48:54.257002] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.311 [2024-05-15 13:48:54.257011] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.311 [2024-05-15 13:48:54.257020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.311 [2024-05-15 13:48:54.257029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.311 [2024-05-15 13:48:54.257038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.311 [2024-05-15 13:48:54.257046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the 
state(5) to be set 00:30:41.312 [tcp.c:1598: the message "The recv state of tqpair=0xd0cf70 is same with the state(5) to be set" repeats for the remainder of this reset window]
00:30:41.312 [2024-05-15 13:48:54.257622] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257648] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257665] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257698] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257736] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257745] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257769] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is 
same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257828] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257836] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257884] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257892] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257901] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257910] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257918] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257926] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257934] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257951] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257975] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257983] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.257992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.258000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.258008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.258017] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.258025] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 [2024-05-15 13:48:54.258033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0cf70 is same with the state(5) to be set 00:30:41.312 13:48:54 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:44.608 13:48:57 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:44.608 00:30:44.608 13:48:57 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:44.865 [2024-05-15 13:48:57.911639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e670 is same with the state(5) to be set 00:30:44.865 [2024-05-15 13:48:57.911696] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e670 is same with the state(5) to be set 00:30:44.865 [2024-05-15 13:48:57.911709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e670 is same with the state(5) to be set 00:30:44.865 [2024-05-15 13:48:57.911718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e670 is same with the state(5) to be set 00:30:44.865 [2024-05-15 13:48:57.911728] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e670 is same with the state(5) to be set 00:30:44.865 [2024-05-15 13:48:57.911737] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e670 is same with the state(5) to be set 00:30:44.865 [2024-05-15 13:48:57.911746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e670 is same with the state(5) to be set 00:30:44.865 [2024-05-15 13:48:57.911754] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e670 is same with the state(5) to be set 00:30:44.865 [2024-05-15 13:48:57.911763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e670 is same with the state(5) to be set 00:30:44.865 [2024-05-15 13:48:57.911772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e670 is same with the state(5) to be set 00:30:44.865 [2024-05-15 13:48:57.911780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e670 is same with the state(5) to be set 00:30:44.865 [2024-05-15 13:48:57.911789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e670 is same with the state(5) to be set 00:30:44.865 [2024-05-15 13:48:57.911797] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e670 is same with the state(5) to be set 00:30:44.865 [2024-05-15 13:48:57.911806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0e670 is same with the state(5) to be set 00:30:44.865 
[... the preceding *ERROR* line repeats unchanged for tqpair=0xd0e670, timestamps 13:48:57.911696 through 13:48:57.912066 ...]
00:30:44.866 13:48:57 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:30:48.178 13:49:00 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:48.178 [2024-05-15 13:49:01.198330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:48.178 13:49:01 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:30:49.551 13:49:02 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:49.551 [2024-05-15 13:49:02.552954] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec7690 is same with the state(5) to be set
[... the preceding *ERROR* line repeats unchanged for tqpair=0xec7690, timestamps 13:49:02.553011 through 13:49:02.554014 ...]
00:30:49.552 13:49:02 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 107193
00:30:56.122 0
00:30:56.122 13:49:08 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 107144
00:30:56.122 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 107144 ']'
00:30:56.122 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 107144
00:30:56.122 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:30:56.122 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:30:56.122 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107144
00:30:56.122 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:30:56.122
13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:30:56.122 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107144'
00:30:56.122 killing process with pid 107144
00:30:56.122 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 107144
00:30:56.122 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 107144
00:30:56.122 13:49:08 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:30:56.122 [2024-05-15 13:48:51.311069] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization...
00:30:56.122 [2024-05-15 13:48:51.311202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107144 ]
00:30:56.122 [2024-05-15 13:48:51.433508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:30:56.122 [2024-05-15 13:48:51.449897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:56.122 [2024-05-15 13:48:51.552271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:56.122 Running I/O for 15 seconds...
00:30:56.122 [2024-05-15 13:48:54.258498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.122 [2024-05-15 13:48:54.258542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:56.122 [2024-05-15 13:48:54.258570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.122 [2024-05-15 13:48:54.258586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:56.122 [2024-05-15 13:48:54.258615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.122 [2024-05-15 13:48:54.258633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:56.122 [2024-05-15 13:48:54.258649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.122 [2024-05-15 13:48:54.258662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:56.122 [2024-05-15 13:48:54.258677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.122 [2024-05-15 13:48:54.258691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:56.122 [2024-05-15 13:48:54.258706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:56.122 [2024-05-15 13:48:54.258719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:56.122
[2024-05-15 13:48:54.258735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.258749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.258764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.258777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.258792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.258805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.258821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.258843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.258858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.258899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.258916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.258930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.258945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.258958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.258974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.258987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.259002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.259015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.259039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.259053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.259069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.259082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.259097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.259110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.259125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.259139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.259154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.259167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.259182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.259195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.259210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.259223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.122 [2024-05-15 13:48:54.259238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.122 [2024-05-15 13:48:54.259251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259364] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 
lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.259972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.259987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 
13:48:54.260302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.123 [2024-05-15 13:48:54.260481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.123 [2024-05-15 13:48:54.260501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.124 [2024-05-15 13:48:54.260515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.260977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.260997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:56.124 [2024-05-15 13:48:54.261232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261520] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.124 [2024-05-15 13:48:54.261639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.124 [2024-05-15 13:48:54.261667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.124 [2024-05-15 13:48:54.261705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.124 [2024-05-15 13:48:54.261720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.261733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.261749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.261762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.261777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.261791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.261806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.261819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.261834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.261848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.261862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.261876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.261891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.261905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.261920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.261933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.261953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.261973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.261990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:23 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:54.262415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262434] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16234d0 is same 
with the state(5) to be set 00:30:56.125 [2024-05-15 13:48:54.262453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.125 [2024-05-15 13:48:54.262463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.125 [2024-05-15 13:48:54.262474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77592 len:8 PRP1 0x0 PRP2 0x0 00:30:56.125 [2024-05-15 13:48:54.262487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262557] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16234d0 was disconnected and freed. reset controller. 00:30:56.125 [2024-05-15 13:48:54.262575] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:56.125 [2024-05-15 13:48:54.262644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.125 [2024-05-15 13:48:54.262666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.125 [2024-05-15 13:48:54.262695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.125 [2024-05-15 13:48:54.262722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.125 [2024-05-15 13:48:54.262754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:54.262768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.125 [2024-05-15 13:48:54.262836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15fde00 (9): Bad file descriptor 00:30:56.125 [2024-05-15 13:48:54.266787] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.125 [2024-05-15 13:48:54.303993] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
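The burst above records bdev_nvme aborting its queued I/O with SQ DELETION status, disconnecting and freeing qpair 0x16234d0, failing over from 10.0.0.2:4420 to 10.0.0.2:4421, and completing the controller reset successfully. As a hypothetical aid only (not part of this test pipeline), the sketch below shows one way to condense console output of this shape into a per-opcode abort count plus a list of failover events; the regexes and the summarize helper are assumptions written against the log lines shown here, and it assumes one log entry per input line as in the original console stream.

#!/usr/bin/env python3
# Minimal sketch (assumption, not part of the SPDK test suite): summarize
# "ABORTED - SQ DELETION" noise and failover events from console output
# shaped like the nvme_qpair / bdev_nvme lines above.
import re
import sys
from collections import Counter

# Patterns matching only the log shapes visible in this console output.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")
ABORT_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: ABORTED - SQ DELETION")
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")

def summarize(lines):
    counts = Counter()   # aborted commands, keyed by opcode (READ/WRITE)
    failovers = []       # (source trid, destination trid) pairs
    pending = None       # opcode of the last printed command awaiting its completion
    for line in lines:
        m = CMD_RE.search(line)
        if m:
            pending = m.group(1)
        if ABORT_RE.search(line) and pending:
            counts[pending] += 1
            pending = None
        f = FAILOVER_RE.search(line)
        if f:
            failovers.append((f.group(1), f.group(2)))
    return counts, failovers

if __name__ == "__main__":
    counts, failovers = summarize(sys.stdin)
    for opcode, n in counts.items():
        print(f"{opcode}: {n} commands aborted (SQ DELETION)")
    for src, dst in failovers:
        print(f"failover: {src} -> {dst}")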
00:30:56.125 [2024-05-15 13:48:57.909591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.125 [2024-05-15 13:48:57.909708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:57.909789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.125 [2024-05-15 13:48:57.909820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:57.909846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.125 [2024-05-15 13:48:57.909872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:57.909897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.125 [2024-05-15 13:48:57.909922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:57.909946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fde00 is same with the state(5) to be set 00:30:56.125 [2024-05-15 13:48:57.912510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:57.912545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:57.912572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:57.912588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:57.912619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:57.912637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.125 [2024-05-15 13:48:57.912653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.125 [2024-05-15 13:48:57.912667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.912682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.912699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.912713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.912727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.912749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.912762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.912778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.912791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.912806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.912819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.912850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.912866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.912881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.912895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.912910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.912923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.912938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.912952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.912966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.912980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.912995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 [2024-05-15 13:48:57.913687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.126 [2024-05-15 13:48:57.913701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.126 
[2024-05-15 13:48:57.913715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.913729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.913744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.913757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.913772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.913785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.913800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:83024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.913814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.913829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.913842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.913857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.913870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.913892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.913906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.913920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.913934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.913948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.913962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.913977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.913990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914005] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:83104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:83128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:55 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:83240 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.127 [2024-05-15 13:48:57.914924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.127 [2024-05-15 13:48:57.914939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:56.127 [2024-05-15 13:48:57.914952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.914967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.128 [2024-05-15 13:48:57.914983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.914998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.128 [2024-05-15 13:48:57.915012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.128 [2024-05-15 13:48:57.915047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.128 [2024-05-15 13:48:57.915075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.128 [2024-05-15 13:48:57.915115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.128 [2024-05-15 13:48:57.915149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.128 [2024-05-15 13:48:57.915178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.128 [2024-05-15 13:48:57.915207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.128 [2024-05-15 13:48:57.915244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.128 [2024-05-15 13:48:57.915275] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.128 [2024-05-15 13:48:57.915303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.128 [2024-05-15 13:48:57.915331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.128 [2024-05-15 13:48:57.915360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915565] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.915978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.915993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.916009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.916022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.916037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.916051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.916066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.916079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.916101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.916124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.916141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.916160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 [2024-05-15 13:48:57.916175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.128 [2024-05-15 13:48:57.916189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.128 
[2024-05-15 13:48:57.916204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.129 [2024-05-15 13:48:57.916231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:48:57.916247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.129 [2024-05-15 13:48:57.916261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:48:57.916276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.129 [2024-05-15 13:48:57.916289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:48:57.916304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.129 [2024-05-15 13:48:57.916317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:48:57.916332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.129 [2024-05-15 13:48:57.916345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:48:57.916360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.129 [2024-05-15 13:48:57.916381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:48:57.916398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.129 [2024-05-15 13:48:57.916411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:48:57.916426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.129 [2024-05-15 13:48:57.916439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:48:57.916454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.129 [2024-05-15 13:48:57.916467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:48:57.916482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.129 [2024-05-15 13:48:57.916496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:48:57.916529] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.129 [2024-05-15 13:48:57.916543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.129 [2024-05-15 13:48:57.916554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83728 len:8 PRP1 0x0 PRP2 0x0 00:30:56.129 [2024-05-15 13:48:57.916567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:48:57.916655] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15f83b0 was disconnected and freed. reset controller. 00:30:56.129 [2024-05-15 13:48:57.916676] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:56.129 [2024-05-15 13:48:57.916689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.129 [2024-05-15 13:48:57.920734] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.129 [2024-05-15 13:48:57.920780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15fde00 (9): Bad file descriptor 00:30:56.129 [2024-05-15 13:48:57.958965] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:56.129 [2024-05-15 13:49:02.554425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.554502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.554535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.554565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.554640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.554691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 
13:49:02.554725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.554754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.554782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.554811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.554840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.554869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.554897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.554926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.554956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.554984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.554998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.555013] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.555028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.555051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.555066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.555081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.555095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.555110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.555123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.555138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.555152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.555167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.555180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.555196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.555209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.555224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.555237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.555262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.555276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.555291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.555305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.129 [2024-05-15 13:49:02.555320] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.129 [2024-05-15 13:49:02.555333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32680 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:56.130 [2024-05-15 13:49:02.555961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.555976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:32768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.555990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:32816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:32824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 
13:49:02.556277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:32848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:32888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:32912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:32920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.130 [2024-05-15 13:49:02.556584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.130 [2024-05-15 13:49:02.556617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:32928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.556632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.556648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:32936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.556661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.556676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.556690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.556704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.556718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.556733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.556747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.556762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.556775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.556790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.556803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.556818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.556831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.556846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.556861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.556876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.556891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.556906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.556920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.556935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:33016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.556948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.556968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.556993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.557023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:33040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.557052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:33120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-05-15 13:49:02.557081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:33128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-05-15 13:49:02.557110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:33136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-05-15 13:49:02.557138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.557167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:33056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.557196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.557224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.557253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:33080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.557281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.557310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:33096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.557338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.557373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:56.131 [2024-05-15 13:49:02.557403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:33144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-05-15 13:49:02.557432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:33152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-05-15 13:49:02.557466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:33160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-05-15 13:49:02.557494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 
[2024-05-15 13:49:02.557509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-05-15 13:49:02.557522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-05-15 13:49:02.557551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-05-15 13:49:02.557579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:33192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-05-15 13:49:02.557621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.131 [2024-05-15 13:49:02.557638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:33200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.131 [2024-05-15 13:49:02.557651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.557666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.557680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.557695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:33216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.557708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.557723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:33224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.557737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.557759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.557773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.557788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.557802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.557817] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:33248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.557830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.557846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:33256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.557860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.557875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:33264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.557888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.557903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.557916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.557936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:33280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.557950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.557965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.557978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.557993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:33296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.558007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:33304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.558035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:33312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.558064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:33320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.558092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:14 nsid:1 lba:33328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.558126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:33336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.558156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:33344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.558185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.558213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:33360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.558242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:33368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.558270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:33376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.558298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.558332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:33392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.558360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:33400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:56.132 [2024-05-15 13:49:02.558389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:56.132 [2024-05-15 
13:49:02.558440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:56.132 [2024-05-15 13:49:02.558451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33408 len:8 PRP1 0x0 PRP2 0x0 00:30:56.132 [2024-05-15 13:49:02.558465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558534] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1624260 was disconnected and freed. reset controller. 00:30:56.132 [2024-05-15 13:49:02.558553] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:56.132 [2024-05-15 13:49:02.558624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.132 [2024-05-15 13:49:02.558657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.132 [2024-05-15 13:49:02.558687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.132 [2024-05-15 13:49:02.558714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.132 [2024-05-15 13:49:02.558741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.132 [2024-05-15 13:49:02.558755] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:56.132 [2024-05-15 13:49:02.562714] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:56.132 [2024-05-15 13:49:02.562757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15fde00 (9): Bad file descriptor 00:30:56.132 [2024-05-15 13:49:02.604341] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
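The reset sequence above ends with bdev_nvme failing the 10.0.0.2:4422 path and falling back to 10.0.0.2:4420 before logging "Resetting controller successful". One way to confirm which path NVMe0 actually settled on after such a reset is to query the bdevperf RPC socket directly; the jq paths below are an assumption about this build's bdev_nvme_get_controllers output, not something taken from this log:

    # Hedged sketch: print the traddr:trsvcid of every path currently held by NVMe0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers -n NVMe0 \
        | jq -r '.[0].ctrlrs[]?.trid | "\(.traddr):\(.trsvcid)"'

The test script itself only greps this RPC output for the controller name (the "grep -q NVMe0" checks further down), so the jq filter is purely illustrative.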
00:30:56.132 00:30:56.132 Latency(us) 00:30:56.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.132 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:56.132 Verification LBA range: start 0x0 length 0x4000 00:30:56.132 NVMe0n1 : 15.01 8929.90 34.88 235.31 0.00 13933.94 629.29 16681.89 00:30:56.132 =================================================================================================================== 00:30:56.132 Total : 8929.90 34.88 235.31 0.00 13933.94 629.29 16681.89 00:30:56.132 Received shutdown signal, test time was about 15.000000 seconds 00:30:56.132 00:30:56.132 Latency(us) 00:30:56.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:56.132 =================================================================================================================== 00:30:56.132 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:56.132 13:49:08 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:30:56.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:56.132 13:49:08 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:56.132 13:49:08 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:56.132 13:49:08 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=107392 00:30:56.132 13:49:08 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 107392 /var/tmp/bdevperf.sock 00:30:56.132 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 107392 ']' 00:30:56.132 13:49:08 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:56.132 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:56.133 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:56.133 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
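The pass/fail gate for the 15-second run summarized above is nothing more than counting reset notices in the captured bdevperf output: three failovers are expected during that run, so exactly three "Resetting controller successful" lines must appear, which is why count=3 satisfies the "(( count != 3 ))" guard. A minimal re-creation of that check, assuming the first bdevperf instance's output was redirected to the try.txt file referenced elsewhere in this run:

    # Sketch of the phase-1 gate; the log path is an assumption based on this run
    count=$(grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
    (( count == 3 )) || { echo "expected 3 controller resets, got $count" >&2; exit 1; }

The second bdevperf instance launched here (pid 107392, "-t 1 -f") then produces the single-failover run that is verified the same way below.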
00:30:56.133 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:56.133 13:49:08 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:56.392 13:49:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:56.392 13:49:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:30:56.392 13:49:09 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:56.961 [2024-05-15 13:49:09.768688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:56.961 13:49:09 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:56.961 [2024-05-15 13:49:10.028864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:56.961 13:49:10 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:57.529 NVMe0n1 00:30:57.529 13:49:10 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:57.788 00:30:57.788 13:49:10 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:58.047 00:30:58.047 13:49:11 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:58.047 13:49:11 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:58.305 13:49:11 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:58.563 13:49:11 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:01.890 13:49:14 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:01.890 13:49:14 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:01.890 13:49:14 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=107532 00:31:01.890 13:49:14 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:01.890 13:49:14 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 107532 00:31:03.262 0 00:31:03.262 13:49:15 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:31:03.262 [2024-05-15 13:49:08.485543] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:31:03.262 [2024-05-15 13:49:08.485665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107392 ] 00:31:03.262 [2024-05-15 13:49:08.604506] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:03.262 [2024-05-15 13:49:08.624814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:03.262 [2024-05-15 13:49:08.723706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:03.262 [2024-05-15 13:49:11.546574] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:03.262 [2024-05-15 13:49:11.546711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.262 [2024-05-15 13:49:11.546736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.262 [2024-05-15 13:49:11.546755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.262 [2024-05-15 13:49:11.546769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.262 [2024-05-15 13:49:11.546783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.262 [2024-05-15 13:49:11.546796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.262 [2024-05-15 13:49:11.546816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:03.262 [2024-05-15 13:49:11.546831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:03.262 [2024-05-15 13:49:11.546845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:03.262 [2024-05-15 13:49:11.546898] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:03.262 [2024-05-15 13:49:11.546930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d7e00 (9): Bad file descriptor 00:31:03.262 [2024-05-15 13:49:11.557949] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:03.262 Running I/O for 1 seconds... 
00:31:03.262 00:31:03.262 Latency(us) 00:31:03.262 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:03.262 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:03.262 Verification LBA range: start 0x0 length 0x4000 00:31:03.262 NVMe0n1 : 1.02 9163.53 35.80 0.00 0.00 13899.34 2085.24 14596.65 00:31:03.262 =================================================================================================================== 00:31:03.262 Total : 9163.53 35.80 0.00 0.00 13899.34 2085.24 14596.65 00:31:03.262 13:49:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:31:03.262 13:49:15 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:03.262 13:49:16 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:03.526 13:49:16 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:03.526 13:49:16 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:31:03.785 13:49:16 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:04.351 13:49:17 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 107392 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 107392 ']' 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 107392 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107392 00:31:07.634 killing process with pid 107392 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107392' 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 107392 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 107392 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:31:07.634 13:49:20 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:31:07.892 
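The second phase, whose output was cat'd from try.txt above, drives failover by hand: after listeners on 4421 and 4422 are added to nqn.2016-06.io.spdk:cnode1, all three ports are attached as paths of the same NVMe0 bdev through the bdevperf RPC socket (the host/failover.sh@78-80 calls above), and the path currently in use is then detached so bdev_nvme has to fail over to a surviving one (here 4420 to 4421). A condensed sketch of that mechanism, with addresses and NQN copied from this run and the interleaved perform_tests/grep steps omitted:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1
    # Register every listener as an additional path of the NVMe0 bdev
    for port in 4420 4421 4422; do
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
    done
    # Drop the active path; the bdev layer is expected to fail over to a remaining listener
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    sleep 3
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0   # the controller must survive the detach

In the real script each attach is issued individually and every detach is followed by a bdevperf.py perform_tests run plus a grep on try.txt, as the log above shows; the loop is only a compact illustration.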
13:49:20 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:07.892 rmmod nvme_tcp 00:31:07.892 rmmod nvme_fabrics 00:31:07.892 rmmod nvme_keyring 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 107027 ']' 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 107027 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 107027 ']' 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 107027 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:07.892 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107027 00:31:08.151 killing process with pid 107027 00:31:08.151 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:08.151 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:08.151 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107027' 00:31:08.151 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 107027 00:31:08.151 [2024-05-15 13:49:20.991379] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:08.151 13:49:20 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 107027 00:31:08.151 13:49:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:08.151 13:49:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:08.151 13:49:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:08.151 13:49:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:08.151 13:49:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:08.151 13:49:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.151 13:49:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:08.151 13:49:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.411 13:49:21 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:08.411 00:31:08.411 real 0m33.598s 00:31:08.411 user 2m11.310s 00:31:08.411 sys 0m4.879s 00:31:08.411 13:49:21 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:08.411 13:49:21 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:08.411 ************************************ 00:31:08.411 END TEST nvmf_failover 00:31:08.411 ************************************ 00:31:08.411 13:49:21 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:08.411 13:49:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:08.411 13:49:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:08.411 13:49:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:08.411 ************************************ 00:31:08.411 START TEST nvmf_host_discovery 00:31:08.411 ************************************ 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:08.411 * Looking for test storage... 00:31:08.411 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:08.411 13:49:21 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:08.411 13:49:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:08.412 Cannot find device 
"nvmf_tgt_br" 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:08.412 Cannot find device "nvmf_tgt_br2" 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:08.412 Cannot find device "nvmf_tgt_br" 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:08.412 Cannot find device "nvmf_tgt_br2" 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:31:08.412 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:08.671 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:08.671 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:08.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:08.671 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:31:08.671 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:08.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:08.671 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:31:08.671 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:08.671 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:08.671 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:08.671 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:08.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:31:08.672 00:31:08.672 --- 10.0.0.2 ping statistics --- 00:31:08.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.672 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:08.672 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:08.672 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:31:08.672 00:31:08.672 --- 10.0.0.3 ping statistics --- 00:31:08.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.672 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:08.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:08.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:31:08.672 00:31:08.672 --- 10.0.0.1 ping statistics --- 00:31:08.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.672 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:08.672 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:08.939 13:49:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:08.939 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:08.939 13:49:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:08.939 13:49:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.939 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=107841 00:31:08.939 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:08.939 13:49:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 107841 00:31:08.939 13:49:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 107841 ']' 00:31:08.939 13:49:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.939 13:49:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:08.939 13:49:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:08.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.939 13:49:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:08.939 13:49:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:08.939 [2024-05-15 13:49:21.848339] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:31:08.939 [2024-05-15 13:49:21.848428] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.939 [2024-05-15 13:49:21.973487] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
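Before the target application comes up, nvmf_veth_init (the block of ip/iptables commands above) builds the whole discovery-test topology: a nvmf_tgt_ns_spdk namespace holding the target ends of two veth pairs (10.0.0.2 and 10.0.0.3), the initiator end (10.0.0.1) kept in the root namespace, and every bridge-side peer enslaved to nvmf_br, which is why all three ping checks succeed. A trimmed reproduction of the same bring-up, limited to the first target interface (run as root; names and addresses are the ones used above):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.2    # initiator -> target, mirroring the ping statistics above

The full script also creates the second pair (nvmf_tgt_if2 with 10.0.0.3) and adds the iptables ACCEPT rules shown above for TCP port 4420 and bridge forwarding.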
00:31:08.939 [2024-05-15 13:49:21.988879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.211 [2024-05-15 13:49:22.087970] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.211 [2024-05-15 13:49:22.088036] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:09.211 [2024-05-15 13:49:22.088047] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.211 [2024-05-15 13:49:22.088056] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.211 [2024-05-15 13:49:22.088063] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:09.211 [2024-05-15 13:49:22.088086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.776 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:09.776 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:09.776 13:49:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:09.776 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:09.776 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.035 [2024-05-15 13:49:22.909507] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.035 [2024-05-15 13:49:22.917427] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:10.035 [2024-05-15 13:49:22.917682] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.035 null0 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.035 null1 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=107891 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 107891 /tmp/host.sock 00:31:10.035 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 107891 ']' 00:31:10.036 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:31:10.036 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:10.036 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:10.036 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:10.036 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:10.036 13:49:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:10.036 [2024-05-15 13:49:22.996031] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:31:10.036 [2024-05-15 13:49:22.996125] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107891 ] 00:31:10.036 [2024-05-15 13:49:23.114565] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
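At this point there are two SPDK processes: the target (pid 107841) running inside nvmf_tgt_ns_spdk on /var/tmp/spdk.sock with its null bdevs and a discovery listener on 10.0.0.2:8009, and a host-side nvmf_tgt (pid 107891, core mask 0x1) on /tmp/host.sock acting as the discovery client. As the log that follows shows, the host side starts a discovery connection to port 8009 and then repeatedly lists its controllers and bdevs, expecting both to be empty until a subsystem is exposed to its host NQN. A condensed sketch of that host-side flow; host_rpc is a hypothetical wrapper, but everything it sends is copied from this run:

    host_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }
    host_rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test
    # Nothing has been exposed to this host NQN yet, so both lists come back empty
    host_rpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    host_rpc bdev_get_bdevs            | jq -r '.[].name' | sort | xargs

Once nqn.2016-06.io.spdk:cnode0 gets a namespace, a 4420 listener, and the nqn.2021-12.io.spdk:test host added on the target side, the same get_subsystem_names query is polled until it reports nvme0.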
00:31:10.295 [2024-05-15 13:49:23.136121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.295 [2024-05-15 13:49:23.239526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:11.243 13:49:24 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.243 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:11.244 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.502 [2024-05-15 13:49:24.414021] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:11.502 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 
'' == \n\v\m\e\0 ]] 00:31:11.761 13:49:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:12.060 [2024-05-15 13:49:25.032970] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:12.060 [2024-05-15 13:49:25.033022] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:12.060 [2024-05-15 13:49:25.033043] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:12.060 [2024-05-15 13:49:25.119101] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:12.318 [2024-05-15 13:49:25.175229] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:12.318 [2024-05-15 13:49:25.175277] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.885 
13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:12.885 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.144 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:13.144 13:49:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:13.144 13:49:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.144 [2024-05-15 13:49:26.006750] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:13.144 [2024-05-15 13:49:26.007664] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:13.144 [2024-05-15 13:49:26.007705] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:13.144 [2024-05-15 13:49:26.094747] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.144 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:13.145 13:49:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:13.145 13:49:26 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:31:13.145 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.145 [2024-05-15 13:49:26.155056] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:13.145 [2024-05-15 13:49:26.155094] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:13.145 [2024-05-15 13:49:26.155103] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:13.145 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:13.145 13:49:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:31:14.521 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.521 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:14.521 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:14.521 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:14.521 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.522 [2024-05-15 13:49:27.308002] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:14.522 [2024-05-15 13:49:27.308046] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:14.522 [2024-05-15 13:49:27.312672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:14.522 id:0 cdw10:00000000 cdw11:00000000 00:31:14.522 [2024-05-15 13:49:27.312727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.522 [2024-05-15 13:49:27.312748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.522 [2024-05-15 13:49:27.312765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.522 [2024-05-15 13:49:27.312781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.522 [2024-05-15 13:49:27.312795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.522 [2024-05-15 13:49:27.312809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:14.522 [2024-05-15 13:49:27.312824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:14.522 [2024-05-15 13:49:27.312839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724fb0 is same with the state(5) to be set 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == 
'"nvme0"' ']]' 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:14.522 [2024-05-15 13:49:27.322598] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1724fb0 (9): Bad file descriptor 00:31:14.522 [2024-05-15 13:49:27.332631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.522 [2024-05-15 13:49:27.332776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.522 [2024-05-15 13:49:27.332828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.522 [2024-05-15 13:49:27.332845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1724fb0 with addr=10.0.0.2, port=4420 00:31:14.522 [2024-05-15 13:49:27.332857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724fb0 is same with the state(5) to be set 00:31:14.522 [2024-05-15 13:49:27.332877] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1724fb0 (9): Bad file descriptor 00:31:14.522 [2024-05-15 13:49:27.332892] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.522 [2024-05-15 13:49:27.332902] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.522 [2024-05-15 13:49:27.332913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.522 [2024-05-15 13:49:27.332929] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
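The @910-@916 xtrace lines that keep repeating through this trace come from the suite's polling helper, waitforcondition. A minimal bash sketch reconstructed from the visible commands (illustrative only; not the verbatim autotest_common.sh source, and the 10-retry / 1-second limits are simply the values shown in the trace):

    # Retry an arbitrary shell condition until it holds, up to ~10 one-second attempts.
    waitforcondition() {
        local cond=$1          # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10
        while (( max-- )); do
            if eval "$cond"; then
                return 0       # condition satisfied
            fi
            sleep 1            # matches the 'sleep 1' at @916 above
        done
        return 1               # gave up (the trace never reaches this branch here)
    }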
00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.522 [2024-05-15 13:49:27.342702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.522 [2024-05-15 13:49:27.342816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.522 [2024-05-15 13:49:27.342865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.522 [2024-05-15 13:49:27.342881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1724fb0 with addr=10.0.0.2, port=4420 00:31:14.522 [2024-05-15 13:49:27.342892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724fb0 is same with the state(5) to be set 00:31:14.522 [2024-05-15 13:49:27.342910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1724fb0 (9): Bad file descriptor 00:31:14.522 [2024-05-15 13:49:27.342924] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.522 [2024-05-15 13:49:27.342933] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.522 [2024-05-15 13:49:27.342944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.522 [2024-05-15 13:49:27.342960] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.522 [2024-05-15 13:49:27.352768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.522 [2024-05-15 13:49:27.352867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.522 [2024-05-15 13:49:27.352914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.522 [2024-05-15 13:49:27.352930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1724fb0 with addr=10.0.0.2, port=4420 00:31:14.522 [2024-05-15 13:49:27.352941] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724fb0 is same with the state(5) to be set 00:31:14.522 [2024-05-15 13:49:27.352958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1724fb0 (9): Bad file descriptor 00:31:14.522 [2024-05-15 13:49:27.352972] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.522 [2024-05-15 13:49:27.352981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.522 [2024-05-15 13:49:27.352991] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.522 [2024-05-15 13:49:27.353006] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
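The values being compared in the [[ ... ]] checks (controller names, bdev lists, path trsvcids, notification counts) come from small rpc_cmd-plus-jq pipelines against the host RPC socket /tmp/host.sock. The pipelines below are exactly what the @55/@59/@63/@74 xtrace lines show, wrapped into functions for readability; the rpc_cmd-to-scripts/rpc.py mapping and the notify_id bookkeeping are inferred, so treat this as a sketch rather than the authoritative discovery.sh source:

    rpc_cmd() { scripts/rpc.py "$@"; }   # assumed transport behind the rpc_cmd wrapper seen in the trace

    get_subsystem_names() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
    get_bdev_list()       { rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs; }
    get_subsystem_paths() { rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs; }

    get_notification_count() {
        # Count events newer than the last seen notify_id, then advance it
        # (inferred from the notify_id progression 0 -> 1 -> 2 -> 4 visible in the trace).
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }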
00:31:14.522 [2024-05-15 13:49:27.362833] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.522 [2024-05-15 13:49:27.362930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.522 [2024-05-15 13:49:27.362975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.522 [2024-05-15 13:49:27.362991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1724fb0 with addr=10.0.0.2, port=4420 00:31:14.522 [2024-05-15 13:49:27.363001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724fb0 is same with the state(5) to be set 00:31:14.522 [2024-05-15 13:49:27.363018] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1724fb0 (9): Bad file descriptor 00:31:14.522 [2024-05-15 13:49:27.363032] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.522 [2024-05-15 13:49:27.363040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.522 [2024-05-15 13:49:27.363050] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.522 [2024-05-15 13:49:27.363075] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:14.522 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.523 [2024-05-15 13:49:27.372904] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.523 [2024-05-15 13:49:27.373053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.523 [2024-05-15 13:49:27.373128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.523 [2024-05-15 13:49:27.373154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1724fb0 with addr=10.0.0.2, port=4420 00:31:14.523 
[2024-05-15 13:49:27.373174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724fb0 is same with the state(5) to be set 00:31:14.523 [2024-05-15 13:49:27.373200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1724fb0 (9): Bad file descriptor 00:31:14.523 [2024-05-15 13:49:27.373240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.523 [2024-05-15 13:49:27.373257] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.523 [2024-05-15 13:49:27.373274] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.523 [2024-05-15 13:49:27.373298] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.523 [2024-05-15 13:49:27.382985] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.523 [2024-05-15 13:49:27.383081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.523 [2024-05-15 13:49:27.383134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.523 [2024-05-15 13:49:27.383150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1724fb0 with addr=10.0.0.2, port=4420 00:31:14.523 [2024-05-15 13:49:27.383161] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724fb0 is same with the state(5) to be set 00:31:14.523 [2024-05-15 13:49:27.383178] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1724fb0 (9): Bad file descriptor 00:31:14.523 [2024-05-15 13:49:27.383192] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.523 [2024-05-15 13:49:27.383200] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.523 [2024-05-15 13:49:27.383210] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.523 [2024-05-15 13:49:27.383225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
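For orientation, the controller resets and connect() errno=111 failures in this stretch follow directly from the target-side RPC sequence issued earlier in the trace. Condensed into one place (the same commands already shown above, via the rpc_cmd wrapper against the target's default RPC socket; comments summarize the host-side effects observed in the log):

    rpc_cmd nvmf_subsystem_add_ns       nqn.2016-06.io.spdk:cnode0 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_host     nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test   # discovery then attaches nvme0 / nvme0n1
    rpc_cmd nvmf_subsystem_add_ns       nqn.2016-06.io.spdk:cnode0 null1                      # second namespace -> nvme0n2
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 # second path, 4420 + 4421
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420  # 4420 withdrawn; the reconnect errors here are the host noticing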
00:31:14.523 [2024-05-15 13:49:27.393043] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:14.523 [2024-05-15 13:49:27.393124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.523 [2024-05-15 13:49:27.393168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:14.523 [2024-05-15 13:49:27.393184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1724fb0 with addr=10.0.0.2, port=4420 00:31:14.523 [2024-05-15 13:49:27.393195] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1724fb0 is same with the state(5) to be set 00:31:14.523 [2024-05-15 13:49:27.393210] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1724fb0 (9): Bad file descriptor 00:31:14.523 [2024-05-15 13:49:27.393224] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:14.523 [2024-05-15 13:49:27.393233] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:14.523 [2024-05-15 13:49:27.393243] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:14.523 [2024-05-15 13:49:27.393258] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:14.523 [2024-05-15 13:49:27.394363] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:14.523 [2024-05-15 13:49:27.394392] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:14.523 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.782 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:14.782 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.782 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:14.782 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:14.782 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.782 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.782 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:14.782 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:31:14.782 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:14.782 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:14.782 13:49:27 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:14.782 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.782 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.782 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:14.782 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:14.783 13:49:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.718 [2024-05-15 13:49:28.763638] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:15.718 [2024-05-15 13:49:28.763685] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:15.718 [2024-05-15 13:49:28.763705] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:15.977 [2024-05-15 13:49:28.850777] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:15.977 [2024-05-15 13:49:28.918304] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:15.977 [2024-05-15 13:49:28.918377] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.977 2024/05/15 13:49:28 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 
trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:31:15.977 request: 00:31:15.977 { 00:31:15.977 "method": "bdev_nvme_start_discovery", 00:31:15.977 "params": { 00:31:15.977 "name": "nvme", 00:31:15.977 "trtype": "tcp", 00:31:15.977 "traddr": "10.0.0.2", 00:31:15.977 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:15.977 "adrfam": "ipv4", 00:31:15.977 "trsvcid": "8009", 00:31:15.977 "wait_for_attach": true 00:31:15.977 } 00:31:15.977 } 00:31:15.977 Got JSON-RPC error response 00:31:15.977 GoRPCClient: error on JSON-RPC call 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:15.977 13:49:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:15.978 13:49:29 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:15.978 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.236 2024/05/15 13:49:29 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:31:16.236 request: 00:31:16.236 { 00:31:16.236 "method": "bdev_nvme_start_discovery", 00:31:16.236 "params": { 00:31:16.236 "name": "nvme_second", 00:31:16.236 "trtype": "tcp", 00:31:16.236 "traddr": "10.0.0.2", 00:31:16.236 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:16.236 "adrfam": "ipv4", 00:31:16.236 "trsvcid": "8009", 00:31:16.236 "wait_for_attach": true 00:31:16.236 } 00:31:16.236 } 00:31:16.236 Got JSON-RPC error response 00:31:16.236 GoRPCClient: error on JSON-RPC call 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.236 13:49:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.169 [2024-05-15 13:49:30.207412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.169 [2024-05-15 13:49:30.207501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.169 [2024-05-15 13:49:30.207521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1761860 with addr=10.0.0.2, port=8010 00:31:17.169 [2024-05-15 13:49:30.207544] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:17.170 [2024-05-15 13:49:30.207554] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:17.170 [2024-05-15 13:49:30.207564] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:18.543 [2024-05-15 13:49:31.207430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.543 [2024-05-15 13:49:31.207530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:18.543 [2024-05-15 13:49:31.207549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1761860 with addr=10.0.0.2, port=8010 00:31:18.543 [2024-05-15 13:49:31.207573] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:18.543 [2024-05-15 13:49:31.207583] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:18.543 [2024-05-15 13:49:31.207593] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:19.475 [2024-05-15 13:49:32.207272] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while 
attaching discovery ctrlr 00:31:19.475 2024/05/15 13:49:32 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:31:19.475 request: 00:31:19.475 { 00:31:19.475 "method": "bdev_nvme_start_discovery", 00:31:19.475 "params": { 00:31:19.475 "name": "nvme_second", 00:31:19.475 "trtype": "tcp", 00:31:19.475 "traddr": "10.0.0.2", 00:31:19.475 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:19.475 "adrfam": "ipv4", 00:31:19.475 "trsvcid": "8010", 00:31:19.475 "attach_timeout_ms": 3000 00:31:19.475 } 00:31:19.475 } 00:31:19.475 Got JSON-RPC error response 00:31:19.475 GoRPCClient: error on JSON-RPC call 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 107891 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:19.475 rmmod nvme_tcp 00:31:19.475 rmmod nvme_fabrics 00:31:19.475 rmmod nvme_keyring 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # 
'[' -n 107841 ']' 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 107841 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 107841 ']' 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 107841 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 107841 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 107841' 00:31:19.475 killing process with pid 107841 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 107841 00:31:19.475 [2024-05-15 13:49:32.389403] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:19.475 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 107841 00:31:19.732 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:19.732 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:19.732 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:19.732 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:19.732 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:19.732 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.732 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:19.732 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.732 13:49:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:19.732 00:31:19.732 real 0m11.312s 00:31:19.732 user 0m22.337s 00:31:19.733 sys 0m1.704s 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.733 ************************************ 00:31:19.733 END TEST nvmf_host_discovery 00:31:19.733 ************************************ 00:31:19.733 13:49:32 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:19.733 13:49:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:31:19.733 13:49:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:19.733 13:49:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:19.733 ************************************ 00:31:19.733 START TEST nvmf_host_multipath_status 00:31:19.733 ************************************ 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:19.733 * Looking for test storage... 00:31:19.733 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 
-- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:19.733 Cannot find device "nvmf_tgt_br" 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip 
link set nvmf_tgt_br2 nomaster 00:31:19.733 Cannot find device "nvmf_tgt_br2" 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:19.733 Cannot find device "nvmf_tgt_br" 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:31:19.733 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:19.991 Cannot find device "nvmf_tgt_br2" 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:19.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:19.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:19.991 13:49:32 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:19.991 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:19.991 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:19.991 13:49:33 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:19.991 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:19.991 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:19.991 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:19.991 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:19.991 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:19.991 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:20.249 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:20.249 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:20.249 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:20.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:31:20.249 00:31:20.249 --- 10.0.0.2 ping statistics --- 00:31:20.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.249 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:31:20.249 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:20.249 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:20.249 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:31:20.249 00:31:20.249 --- 10.0.0.3 ping statistics --- 00:31:20.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.249 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:31:20.249 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:20.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:31:20.249 00:31:20.249 --- 10.0.0.1 ping statistics --- 00:31:20.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.249 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:31:20.249 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.249 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:31:20.249 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:20.249 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=108380 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 108380 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 108380 ']' 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:20.250 13:49:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:20.250 [2024-05-15 13:49:33.229152] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:31:20.250 [2024-05-15 13:49:33.229298] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.511 [2024-05-15 13:49:33.366165] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
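For reference, the topology that nvmf_veth_init assembles above, and inside which nvmf_tgt is then launched in the nvmf_tgt_ns_spdk namespace, can be reproduced by hand with roughly the following commands. This is a condensed sketch of what the traced setup does, using the same names that appear in the log (nvmf_tgt_ns_spdk, nvmf_init_if/nvmf_init_br, nvmf_tgt_if/nvmf_tgt_br, nvmf_br, 10.0.0.0/24); the second target interface (nvmf_tgt_if2 at 10.0.0.3) follows the same pattern and is omitted here.

# create the target namespace and the veth pairs that carry NVMe/TCP traffic
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# address the host end and the namespace end, then bring the links up
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the host-side peer interfaces together and open the NVMe/TCP port
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # host -> namespace reachability, as verified in the log above

The three pings traced above confirm host-to-namespace (10.0.0.2, 10.0.0.3) and namespace-to-host (10.0.0.1) reachability before the target application is started with ip netns exec nvmf_tgt_ns_spdk nvmf_tgt.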
00:31:20.511 [2024-05-15 13:49:33.385177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:20.511 [2024-05-15 13:49:33.484516] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.511 [2024-05-15 13:49:33.484569] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.511 [2024-05-15 13:49:33.484581] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.511 [2024-05-15 13:49:33.484590] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.511 [2024-05-15 13:49:33.484598] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.511 [2024-05-15 13:49:33.484712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.511 [2024-05-15 13:49:33.484960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.444 13:49:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:21.444 13:49:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:21.444 13:49:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:21.444 13:49:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:21.444 13:49:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:21.444 13:49:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.444 13:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=108380 00:31:21.444 13:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:21.444 [2024-05-15 13:49:34.474475] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.444 13:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:21.701 Malloc0 00:31:21.701 13:49:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:21.959 13:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:22.217 13:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.474 [2024-05-15 13:49:35.550044] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:22.474 [2024-05-15 13:49:35.550327] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.732 13:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:22.732 [2024-05-15 13:49:35.822558] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:23.057 13:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=108484 00:31:23.057 13:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:23.057 13:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:23.057 13:49:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 108484 /var/tmp/bdevperf.sock 00:31:23.057 13:49:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 108484 ']' 00:31:23.057 13:49:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:23.057 13:49:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:23.057 13:49:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:23.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:23.057 13:49:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:23.057 13:49:35 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:24.012 13:49:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:24.012 13:49:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:31:24.013 13:49:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:24.270 13:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:24.527 Nvme0n1 00:31:24.527 13:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:25.094 Nvme0n1 00:31:25.094 13:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:25.094 13:49:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:26.998 13:49:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:26.998 13:49:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:27.255 13:49:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:27.513 13:49:40 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:28.447 13:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:28.447 13:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:28.447 13:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.447 13:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:28.706 13:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:28.706 13:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:28.706 13:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.706 13:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:28.963 13:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:28.963 13:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:28.963 13:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:28.963 13:49:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:29.222 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.222 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:29.222 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.222 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:29.479 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.480 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:29.480 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.480 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:29.737 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.737 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:29.737 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:31:29.737 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:29.995 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.995 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:29.995 13:49:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:30.253 13:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:30.511 13:49:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:31.444 13:49:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:31.444 13:49:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:31.444 13:49:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.444 13:49:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:31.702 13:49:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:31.702 13:49:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:31.702 13:49:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.702 13:49:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:32.268 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.268 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:32.268 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.268 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:32.526 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.526 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:32.526 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.526 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:32.785 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.785 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:32.785 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.785 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:33.043 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.043 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:33.043 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.043 13:49:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:33.301 13:49:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:33.301 13:49:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:33.301 13:49:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:33.624 13:49:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:33.883 13:49:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:34.816 13:49:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:34.816 13:49:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:34.816 13:49:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.816 13:49:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:35.075 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.075 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:35.075 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.075 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:35.332 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:35.332 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:35.332 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.332 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:35.589 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.589 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:35.589 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:35.590 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.847 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.847 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:35.847 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:35.847 13:49:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:36.412 13:49:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.412 13:49:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:36.412 13:49:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.412 13:49:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:36.412 13:49:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.412 13:49:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:36.412 13:49:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:36.669 13:49:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:36.927 13:49:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:37.894 13:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:37.894 13:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:37.894 13:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.894 13:49:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:31:38.172 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.172 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:38.172 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.172 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:38.431 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:38.431 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:38.431 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:38.431 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.691 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.691 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:38.691 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.691 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:38.949 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.949 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:38.949 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.949 13:49:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:39.208 13:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:39.208 13:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:39.208 13:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:39.208 13:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:39.466 13:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:39.466 13:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:39.466 13:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 -n inaccessible 00:31:39.724 13:49:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:39.982 13:49:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:41.370 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:41.370 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:41.370 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.370 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:41.370 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:41.370 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:41.370 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.370 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:41.629 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:41.629 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:41.629 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.629 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:41.888 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.888 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:41.888 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.888 13:49:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:42.146 13:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:42.146 13:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:42.146 13:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.146 13:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:42.405 13:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:31:42.405 13:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:42.405 13:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:42.405 13:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:42.694 13:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:42.694 13:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:42.694 13:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:42.964 13:49:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:43.222 13:49:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:44.157 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:44.157 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:44.157 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.157 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:44.415 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:44.415 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:44.415 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.415 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:44.674 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.674 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:44.674 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.674 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:44.932 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.932 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:44.932 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.932 13:49:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:45.191 13:49:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:45.191 13:49:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:45.191 13:49:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.191 13:49:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:45.449 13:49:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:45.449 13:49:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:45.449 13:49:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:45.450 13:49:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:46.016 13:49:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.016 13:49:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:46.016 13:49:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:46.016 13:49:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:46.582 13:49:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:46.582 13:49:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:47.957 13:50:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:47.957 13:50:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:47.957 13:50:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.957 13:50:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:47.957 13:50:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.957 13:50:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:47.957 13:50:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:31:47.957 13:50:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.215 13:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.215 13:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:48.215 13:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.215 13:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:48.474 13:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.474 13:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:48.474 13:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:48.474 13:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.733 13:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.733 13:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:48.733 13:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.733 13:50:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:48.992 13:50:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:48.992 13:50:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:48.992 13:50:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:48.992 13:50:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:49.559 13:50:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:49.559 13:50:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:49.559 13:50:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:49.559 13:50:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:49.818 13:50:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:51.195 13:50:03 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:51.195 13:50:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:51.195 13:50:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.195 13:50:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:51.195 13:50:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:51.195 13:50:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:51.195 13:50:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:51.195 13:50:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.464 13:50:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.464 13:50:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:51.464 13:50:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.464 13:50:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:51.723 13:50:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:51.723 13:50:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:51.723 13:50:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:51.723 13:50:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:52.289 13:50:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.289 13:50:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:52.289 13:50:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.289 13:50:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:52.289 13:50:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.289 13:50:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:52.289 13:50:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.289 13:50:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:31:52.854 13:50:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.854 13:50:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:52.854 13:50:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:53.111 13:50:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:53.367 13:50:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:54.299 13:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:54.299 13:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:54.299 13:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.299 13:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:54.556 13:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.556 13:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:54.556 13:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:54.556 13:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.813 13:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:54.813 13:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:54.814 13:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:54.814 13:50:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:55.071 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.071 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:55.071 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:55.071 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.329 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.329 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # 
port_status 4420 accessible true 00:31:55.329 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.329 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:55.585 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.585 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:55.585 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.585 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:55.843 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.843 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:55.843 13:50:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:56.101 13:50:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:56.359 13:50:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:57.293 13:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:57.293 13:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:57.293 13:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.293 13:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:57.859 13:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.859 13:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:57.859 13:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.859 13:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:57.859 13:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:57.859 13:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:57.859 13:50:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:57.859 13:50:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:58.425 13:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.425 13:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:58.425 13:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.425 13:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:58.425 13:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.425 13:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:58.425 13:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.425 13:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:58.682 13:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:58.682 13:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:58.682 13:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:58.683 13:50:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.260 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:59.260 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 108484 00:31:59.260 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 108484 ']' 00:31:59.260 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 108484 00:31:59.260 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:59.260 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:59.260 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 108484 00:31:59.260 killing process with pid 108484 00:31:59.260 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:31:59.260 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:31:59.260 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 108484' 00:31:59.260 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 108484 00:31:59.260 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 108484 00:31:59.260 Connection closed with partial response: 00:31:59.260 00:31:59.260 00:31:59.260 13:50:12 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 108484 00:31:59.260 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:31:59.260 [2024-05-15 13:49:35.910058] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:31:59.260 [2024-05-15 13:49:35.910221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108484 ] 00:31:59.260 [2024-05-15 13:49:36.038519] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:59.260 [2024-05-15 13:49:36.058058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:59.260 [2024-05-15 13:49:36.159233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.260 Running I/O for 90 seconds... 00:31:59.260 [2024-05-15 13:49:52.766362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.260 [2024-05-15 13:49:52.766444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.766503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.766526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.766550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.766566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.766588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.766616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.766641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.766658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.766680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.766695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.766717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.766732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.766755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.766770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.766792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.766808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.766829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.766845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.766888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.766907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.766929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.766945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.766966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.766981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.767002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.767018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.767041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.767057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.767078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.767093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.767114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.767130] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.767152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.767168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.767189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.767205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:59.260 [2024-05-15 13:49:52.767227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.260 [2024-05-15 13:49:52.767242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.767264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.767279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.767300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.767316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.767338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.767363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.767387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.767404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.769477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.769509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.769541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.769558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.769584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.769611] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.769640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.769656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.769682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.769698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.769724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.261 [2024-05-15 13:49:52.769739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.769766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.261 [2024-05-15 13:49:52.769782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.769808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.261 [2024-05-15 13:49:52.769824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.769851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.261 [2024-05-15 13:49:52.769866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.769892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:3872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.261 [2024-05-15 13:49:52.769908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.769933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.261 [2024-05-15 13:49:52.769949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.769988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.261 [2024-05-15 13:49:52.770006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:59.261 [2024-05-15 13:49:52.770131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:4192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 
lba:4200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:4216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.770968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.770984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.771010] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.771026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.771052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.771068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.771094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.261 [2024-05-15 13:49:52.771109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:59.261 [2024-05-15 13:49:52.771136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771447] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:4384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 
dnr:0 00:31:59.262 [2024-05-15 13:49:52.771901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.771958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.771984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.772000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.772027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.772043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.772069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.772084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.772111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.772128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.772154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.772170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.772197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.772213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.772258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.772288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.772315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.772332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:49:52.772358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:49:52.772374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:50:09.329969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:126752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:50:09.330075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:50:09.330111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:50:09.330129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:50:09.330151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:126784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:50:09.330166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:50:09.330188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:50:09.330203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:50:09.330224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:126816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:50:09.330239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:50:09.330260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:50:09.330275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:50:09.330296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:126848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.262 [2024-05-15 13:50:09.330326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:50:09.330363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.262 [2024-05-15 13:50:09.330378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:50:09.330399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.262 [2024-05-15 13:50:09.330414] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:50:09.330464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.262 [2024-05-15 13:50:09.330481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:50:09.330502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.262 [2024-05-15 13:50:09.330516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:59.262 [2024-05-15 13:50:09.330537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.262 [2024-05-15 13:50:09.330552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.330573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.330588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.330608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.330623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.330660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:126856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.330677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.330698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.330723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.330744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:126888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.330759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.330781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.330796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.330817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:126920 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:31:59.263 [2024-05-15 13:50:09.330832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.330852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.330867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.330888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.330902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.330938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.330989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.331010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.331025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.331045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.331059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.331079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.331094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.331114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.331128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.331147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.331162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.331181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.331195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.331215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:26 nsid:1 lba:127080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.331229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.331260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.331275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.332244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.332311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.332347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.332397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.332446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.332482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.332519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.332557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332578] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.332593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.332647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.332684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.332720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.332756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.263 [2024-05-15 13:50:09.332791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.332827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.332863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.332909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.332931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.332947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.333745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.333772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.333799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.333817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.333839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.263 [2024-05-15 13:50:09.333854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:59.263 [2024-05-15 13:50:09.333876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.333891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.333913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.333928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.333949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.333965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.333985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.264 [2024-05-15 13:50:09.334000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.264 [2024-05-15 13:50:09.334036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.264 [2024-05-15 13:50:09.334072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.264 [2024-05-15 13:50:09.334107] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.334157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.334193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.334229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.334264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.334299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.334337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.264 [2024-05-15 13:50:09.334377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.264 [2024-05-15 13:50:09.334417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.264 [2024-05-15 13:50:09.334455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.264 [2024-05-15 
13:50:09.334493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.334531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.334569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:126936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.334633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.334672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.334729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.334766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.334804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.334826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.334843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.336273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:127312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.336305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.336332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:127328 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.336349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.336372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.336387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.336408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.264 [2024-05-15 13:50:09.336424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.336455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.264 [2024-05-15 13:50:09.336470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.336492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.264 [2024-05-15 13:50:09.336508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:59.264 [2024-05-15 13:50:09.336529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.264 [2024-05-15 13:50:09.336556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.336580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.336595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.336632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.336650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.336671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.336687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.336708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.336723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.336745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.336760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.336781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.336796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.336817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.336832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.336853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.336868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.336890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.336905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.336926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.336940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.336961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.336976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.336998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.337022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.337071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.337108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 
sqhd:000a p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.337144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.337180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.337217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.337253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.337289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.337325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.337361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.337396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.337432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.337468] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.337523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:126832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.337559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.337595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.265 [2024-05-15 13:50:09.337644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.337686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.337708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.337723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.338688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.338724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.338752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.338770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.339741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.339769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.339796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 
13:50:09.339812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.339834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.339850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.339871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.339891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.339926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.339943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.339964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.339980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.340001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.265 [2024-05-15 13:50:09.340016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:59.265 [2024-05-15 13:50:09.340037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.266 [2024-05-15 13:50:09.340053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.266 [2024-05-15 13:50:09.340088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.266 [2024-05-15 13:50:09.340124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.340161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127328 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.266 [2024-05-15 13:50:09.340197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.340241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.340291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:127160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.266 [2024-05-15 13:50:09.340334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.340371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.340417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.340465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.340502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.340538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.340574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.340624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.340664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.340701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.266 [2024-05-15 13:50:09.340737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.340773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:126768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.266 [2024-05-15 13:50:09.340810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.340845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.340868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:126904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.266 [2024-05-15 13:50:09.340892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.341713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:127256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.341741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.341767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.341790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 
m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.341813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.341829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.341850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.341865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.341886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.341901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.341923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.341938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.341959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.341973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.341994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.342009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.342030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.342045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.342066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.266 [2024-05-15 13:50:09.342081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.342102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.342117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.342138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.266 [2024-05-15 13:50:09.342187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.342212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.266 [2024-05-15 13:50:09.342228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.342249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.266 [2024-05-15 13:50:09.342264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.342286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:127560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.266 [2024-05-15 13:50:09.342302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.344077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.344104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.344130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.266 [2024-05-15 13:50:09.344146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.344167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.266 [2024-05-15 13:50:09.344183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:59.266 [2024-05-15 13:50:09.344204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.344219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.344265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.344322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 
13:50:09.344358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.344394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.344437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.344488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.344524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.344560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.344596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.344647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.344685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.344721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126872 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.344757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:127000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.344823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.344875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.344910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.344945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.344980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.344997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.345017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.345032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.345053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.345067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.345088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.345102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.345122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.345137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.345157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.345188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.345209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.345224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.345245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.345260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.345281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.267 [2024-05-15 13:50:09.345296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.345317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.345332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.345353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.345368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.345389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:127544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.345405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.348714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.348744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.348772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:127584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.348790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.348811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:127600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.348835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 
dnr:0 00:31:59.267 [2024-05-15 13:50:09.348856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.348871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.348893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.348908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.348929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:127648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.348975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.348994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:127664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.349008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.349028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.349042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.349061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.349075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.349095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.267 [2024-05-15 13:50:09.349109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:59.267 [2024-05-15 13:50:09.349129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.349143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:127744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.349176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.349221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.349257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.349291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:127808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.349324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.349357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.349390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.349431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.349464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.349498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.349531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.349565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.349599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.349689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.349730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.349767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.349802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.349838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.349874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.349910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.349947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.349967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.349982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.350003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.350018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.350044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.350060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.350081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.350096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.350117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.350139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.350161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.350176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.350197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.350213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.350233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.350248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.350269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.350284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.350305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.350320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.350357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:127448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.350371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.350391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.350406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.350427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.350441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.350477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.268 [2024-05-15 13:50:09.350492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.351883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.351912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.351938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.351955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.352012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.352028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:59.268 [2024-05-15 13:50:09.352060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.268 [2024-05-15 13:50:09.352076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.352102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:127952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.352116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.352136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:127968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.352150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.352170] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:127984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.352185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.352204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.352219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.352238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.352253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.352313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.352329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.352351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.352366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.352387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.269 [2024-05-15 13:50:09.352403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.352424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.269 [2024-05-15 13:50:09.352439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.352460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.269 [2024-05-15 13:50:09.352476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.372289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.269 [2024-05-15 13:50:09.372328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.372370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.269 [2024-05-15 13:50:09.372388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 
sqhd:0027 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.372424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.269 [2024-05-15 13:50:09.372446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.372477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.269 [2024-05-15 13:50:09.372498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.372530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:127584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.372552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.372582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.372604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.372654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.372680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.372712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.372733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.372765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.372786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.373974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:127744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.374027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:127776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.374096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.374147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.269 [2024-05-15 13:50:09.374199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:127400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.374269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.374323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.269 [2024-05-15 13:50:09.374375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.269 [2024-05-15 13:50:09.374427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.269 [2024-05-15 13:50:09.374478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.269 [2024-05-15 13:50:09.374530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.269 [2024-05-15 13:50:09.374590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.374664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.269 [2024-05-15 
13:50:09.374715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.374767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:127864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.269 [2024-05-15 13:50:09.374819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:59.269 [2024-05-15 13:50:09.374849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.374871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.374900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.374933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.374966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.374998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.375028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:128064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.375049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.375079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.375113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.375142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.375163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.375193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.375214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.375245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:128128 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.375266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.375297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.375318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.375348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:128160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.375369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.375399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.375420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.375451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.375472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.376452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.376491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.376528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.376551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.376619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.376656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.376689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.376711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.376741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.376763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.376792] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.376814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.376844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.376865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.376895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.376916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.376947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.376968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.376998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.377019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.377050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.377081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.378983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.379022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.379084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.379146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.379216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:31:59.270 [2024-05-15 13:50:09.379246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.379269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.379320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.379379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.379432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.379483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.379534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.379593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.379680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.379732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.270 [2024-05-15 13:50:09.379783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.379834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.379901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:59.270 [2024-05-15 13:50:09.379931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.270 [2024-05-15 13:50:09.379952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.379993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.380014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.380045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.380066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.380096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.380117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.380148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:127960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.380169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.380200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.380221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.380251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:128024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.380299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.380332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.380354] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.380384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.380404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.380434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.380455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.380485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.380506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.380536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.380570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.382296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.382338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.382376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:127632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.382399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.382430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.382453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.382483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.382505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.382535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:128224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.382557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.382588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:128240 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.382630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.382664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:128256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.382686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.382716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:128272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.382738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.382769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.382790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.382821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.382843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.383553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.383618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.383676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.383723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.383757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:127728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.383779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.383810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:127792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.383831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.383861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.383883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.383913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:95 nsid:1 lba:128184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.383934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.383965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:127776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.383996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.384026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.384048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.384078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:127312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.384099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.384129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.384150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.384181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:127528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.384203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.384233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.384270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.384305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.384327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.384357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.384379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.384422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.384444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.384474] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:127936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.384496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.384526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.271 [2024-05-15 13:50:09.384547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.384577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:127616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:59.271 [2024-05-15 13:50:09.384598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:59.271 [2024-05-15 13:50:09.384659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-05-15 13:50:09.384682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:59.272 [2024-05-15 13:50:09.384713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-05-15 13:50:09.384734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:59.272 [2024-05-15 13:50:09.384774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-05-15 13:50:09.384789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:59.272 [2024-05-15 13:50:09.384810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-05-15 13:50:09.384826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:59.272 [2024-05-15 13:50:09.384847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-05-15 13:50:09.384861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:59.272 [2024-05-15 13:50:09.384882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:127888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-05-15 13:50:09.384897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:59.272 [2024-05-15 13:50:09.384918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:127952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-05-15 13:50:09.384933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:31:59.272 [2024-05-15 13:50:09.384954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-05-15 13:50:09.384969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:59.272 [2024-05-15 13:50:09.385013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-05-15 13:50:09.385030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:59.272 [2024-05-15 13:50:09.385051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-05-15 13:50:09.385065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:59.272 [2024-05-15 13:50:09.385085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-05-15 13:50:09.385100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:59.272 [2024-05-15 13:50:09.385121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:127808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:59.272 [2024-05-15 13:50:09.385136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:59.272 Received shutdown signal, test time was about 34.080249 seconds 00:31:59.272 00:31:59.272 Latency(us) 00:31:59.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.272 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:59.272 Verification LBA range: start 0x0 length 0x4000 00:31:59.272 Nvme0n1 : 34.08 8406.59 32.84 0.00 0.00 15196.59 240.17 4026531.84 00:31:59.272 =================================================================================================================== 00:31:59.272 Total : 8406.59 32.84 0.00 0.00 15196.59 240.17 4026531.84 00:31:59.272 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:59.530 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:59.530 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:31:59.530 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:59.530 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:59.530 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:59.790 13:50:12 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:59.790 rmmod nvme_tcp 00:31:59.790 rmmod nvme_fabrics 00:31:59.790 rmmod nvme_keyring 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 108380 ']' 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 108380 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 108380 ']' 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 108380 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 108380 00:31:59.790 killing process with pid 108380 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:59.790 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:59.791 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 108380' 00:31:59.791 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 108380 00:31:59.791 [2024-05-15 13:50:12.753036] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:59.791 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 108380 00:32:00.050 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:00.050 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:00.050 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:00.050 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:00.050 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:00.050 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.050 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.050 13:50:12 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.050 13:50:13 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:32:00.050 00:32:00.050 real 0m40.346s 00:32:00.050 user 2m12.257s 00:32:00.050 sys 0m9.716s 00:32:00.050 13:50:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:00.050 13:50:13 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:32:00.050 ************************************ 00:32:00.050 END TEST 
nvmf_host_multipath_status 00:32:00.050 ************************************ 00:32:00.050 13:50:13 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:00.050 13:50:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:00.050 13:50:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:00.050 13:50:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:00.050 ************************************ 00:32:00.050 START TEST nvmf_discovery_remove_ifc 00:32:00.050 ************************************ 00:32:00.050 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:32:00.309 * Looking for test storage... 00:32:00.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:00.309 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:00.309 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:32:00.309 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 
-- # '[' tcp == rdma ']' 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- 
# NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:00.310 Cannot find device "nvmf_tgt_br" 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:00.310 Cannot find device "nvmf_tgt_br2" 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:00.310 Cannot find device "nvmf_tgt_br" 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:00.310 Cannot find device "nvmf_tgt_br2" 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:00.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:00.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:00.310 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:00.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:32:00.568 00:32:00.568 --- 10.0.0.2 ping statistics --- 00:32:00.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.568 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:00.568 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:00.568 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:32:00.568 00:32:00.568 --- 10.0.0.3 ping statistics --- 00:32:00.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.568 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:00.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:32:00.568 00:32:00.568 --- 10.0.0.1 ping statistics --- 00:32:00.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.568 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=109783 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 109783 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 109783 ']' 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:00.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:00.568 13:50:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:00.568 [2024-05-15 13:50:13.594212] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:32:00.568 [2024-05-15 13:50:13.594307] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.826 [2024-05-15 13:50:13.717544] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
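At this point the harness has finished nvmf_veth_init: a network namespace (nvmf_tgt_ns_spdk) holds the target-side ends of the veth pairs, a bridge (nvmf_br) joins the two halves, the 10.0.0.1/2/3 addresses are assigned, TCP port 4420 is opened in iptables, and the ping checks above confirm reachability before the target is launched inside the namespace. Condensed from the traced commands (same interface names and addresses as in the log; needs root), the topology setup amounts to:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # host-facing pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up     # bridge joining both halves
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # host -> target reachability
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> host reachability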
00:32:00.826 [2024-05-15 13:50:13.739169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.826 [2024-05-15 13:50:13.839762] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.826 [2024-05-15 13:50:13.839832] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.826 [2024-05-15 13:50:13.839845] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.826 [2024-05-15 13:50:13.839856] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.826 [2024-05-15 13:50:13.839865] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.826 [2024-05-15 13:50:13.839902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.762 [2024-05-15 13:50:14.599821] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:01.762 [2024-05-15 13:50:14.607775] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:01.762 [2024-05-15 13:50:14.608008] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:01.762 null0 00:32:01.762 [2024-05-15 13:50:14.639925] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=109833 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 109833 /tmp/host.sock 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 109833 ']' 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:01.762 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
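Two SPDK processes are now running: the target, launched inside the namespace as "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2" and answering on the default RPC socket, which the notices above show listening on 10.0.0.2 port 8009 (discovery) and port 4420, with a null0 bdev created to back the data subsystem (named nqn.2016-06.io.spdk:cnode0 later in the trace); and a host-side instance started as "nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme", driven over /tmp/host.sock. The host-side bring-up traced just below reduces to three RPCs; rpc_cmd is the harness wrapper around scripts/rpc.py, so an equivalent direct invocation (a sketch reusing the exact arguments from the trace) would be:

  # host-side bring-up over the second instance's RPC socket
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

The --wait-for-attach flag keeps the discovery RPC from returning until the discovered subsystem's controller is attached, which is why nvme0n1 is already listed the first time the bdev list is polled, and the two-second ctrlr-loss timeout is what later bounds the reconnect attempts once the target interface is pulled.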
00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:01.762 13:50:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.762 [2024-05-15 13:50:14.720436] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:32:01.762 [2024-05-15 13:50:14.721033] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109833 ] 00:32:01.762 [2024-05-15 13:50:14.843294] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:02.020 [2024-05-15 13:50:14.860204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.020 [2024-05-15 13:50:14.962123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.955 13:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:02.955 13:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:32:02.955 13:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:02.955 13:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:02.955 13:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.955 13:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.955 13:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.955 13:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:02.955 13:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.955 13:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.955 13:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.955 13:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:02.955 13:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.955 13:50:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:03.891 [2024-05-15 13:50:16.863246] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:03.891 [2024-05-15 13:50:16.863291] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:03.891 [2024-05-15 13:50:16.863311] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:03.891 [2024-05-15 13:50:16.949437] 
bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:04.150 [2024-05-15 13:50:17.007269] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:04.150 [2024-05-15 13:50:17.007350] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:04.150 [2024-05-15 13:50:17.007380] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:04.150 [2024-05-15 13:50:17.007401] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:04.150 [2024-05-15 13:50:17.007431] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.150 [2024-05-15 13:50:17.011786] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x12ac3c0 was disconnected and freed. delete nvme_qpair. 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.150 
13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:04.150 13:50:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:05.082 13:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:05.082 13:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:05.082 13:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:05.082 13:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:05.082 13:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:05.082 13:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.082 13:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:05.082 13:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.340 13:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:05.340 13:50:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:06.275 13:50:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:06.275 13:50:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:06.275 13:50:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:06.275 13:50:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:06.275 13:50:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.275 13:50:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:06.275 13:50:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:06.275 13:50:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.275 13:50:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:06.275 13:50:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:07.211 13:50:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:07.211 13:50:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:07.211 13:50:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:07.211 13:50:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.211 13:50:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:07.211 13:50:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:07.211 13:50:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:07.211 13:50:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.211 13:50:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:07.211 13:50:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:08.607 
13:50:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:08.607 13:50:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:08.607 13:50:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.607 13:50:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:08.607 13:50:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:08.607 13:50:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:08.607 13:50:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:08.607 13:50:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.607 13:50:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:08.607 13:50:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:09.543 13:50:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:09.543 13:50:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:09.543 13:50:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:09.543 13:50:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:09.543 13:50:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:09.543 13:50:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.543 13:50:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:09.543 13:50:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.543 13:50:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:09.543 13:50:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:09.543 [2024-05-15 13:50:22.433737] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:09.543 [2024-05-15 13:50:22.433816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.543 [2024-05-15 13:50:22.433833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.543 [2024-05-15 13:50:22.433846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.543 [2024-05-15 13:50:22.433863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.543 [2024-05-15 13:50:22.433873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.543 [2024-05-15 13:50:22.433882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.543 [2024-05-15 13:50:22.433892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.543 [2024-05-15 13:50:22.433901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.543 [2024-05-15 13:50:22.433911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:09.543 [2024-05-15 13:50:22.433921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:09.543 [2024-05-15 13:50:22.433930] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12885c0 is same with the state(5) to be set 00:32:09.543 [2024-05-15 13:50:22.443730] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12885c0 (9): Bad file descriptor 00:32:09.543 [2024-05-15 13:50:22.453757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:10.477 13:50:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:10.477 13:50:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:10.477 13:50:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.477 13:50:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:10.477 13:50:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:10.477 13:50:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:10.477 13:50:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:10.477 [2024-05-15 13:50:23.516697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:11.854 [2024-05-15 13:50:24.540737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:11.854 [2024-05-15 13:50:24.540879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12885c0 with addr=10.0.0.2, port=4420 00:32:11.854 [2024-05-15 13:50:24.540915] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12885c0 is same with the state(5) to be set 00:32:11.854 [2024-05-15 13:50:24.541832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12885c0 (9): Bad file descriptor 00:32:11.854 [2024-05-15 13:50:24.541920] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:32:11.855 [2024-05-15 13:50:24.541972] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:11.855 [2024-05-15 13:50:24.542072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.855 [2024-05-15 13:50:24.542101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.855 [2024-05-15 13:50:24.542128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.855 [2024-05-15 13:50:24.542150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.855 [2024-05-15 13:50:24.542173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.855 [2024-05-15 13:50:24.542193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.855 [2024-05-15 13:50:24.542214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.855 [2024-05-15 13:50:24.542234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.855 [2024-05-15 13:50:24.542255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:11.855 [2024-05-15 13:50:24.542276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:11.855 [2024-05-15 13:50:24.542296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:32:11.855 [2024-05-15 13:50:24.542359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1273860 (9): Bad file descriptor 00:32:11.855 [2024-05-15 13:50:24.543366] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:11.855 [2024-05-15 13:50:24.543422] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:11.855 13:50:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.855 13:50:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:11.855 13:50:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:12.788 13:50:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:13.724 [2024-05-15 13:50:26.554981] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:13.724 [2024-05-15 13:50:26.555018] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:13.724 [2024-05-15 13:50:26.555038] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:13.724 [2024-05-15 13:50:26.641114] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:13.724 [2024-05-15 13:50:26.696464] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:13.724 [2024-05-15 13:50:26.696530] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:13.724 [2024-05-15 13:50:26.696556] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:13.724 [2024-05-15 13:50:26.696573] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:13.724 [2024-05-15 13:50:26.696583] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:13.724 [2024-05-15 13:50:26.703596] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x12b6d10 was disconnected and freed. delete nvme_qpair. 
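Editor's note: the trace above keeps re-running the same pipeline over the per-test RPC socket (rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs), sleeping one second between attempts until the bdev list reaches the expected state: empty right after the target interface is pulled, then nvme1n1 once discovery re-attaches. The snippet below is a minimal re-creation of that polling pattern built only from the commands visible in the trace; the function names and the direct scripts/rpc.py call stand in for the harness's own rpc_cmd/get_bdev_list/wait_for_bdev helpers and are assumptions, not the repository code.

    HOST_SOCK=/tmp/host.sock   # per-test SPDK application RPC socket, as seen in the trace

    get_bdev_list() {
        # Names of all bdevs visible over the host-side RPC socket, sorted and
        # collapsed onto one line (empty string when nothing is attached).
        ./scripts/rpc.py -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev_list() {
        # Poll once a second until the list matches the expected value.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev_list ""         # e.g. wait for nvme0n1 to disappear after the interface is removed
    wait_for_bdev_list nvme1n1    # then wait for discovery to re-attach the namespace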
00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 109833 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 109833 ']' 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 109833 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 109833 00:32:13.724 killing process with pid 109833 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 109833' 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 109833 00:32:13.724 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 109833 00:32:13.982 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:13.982 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:13.982 13:50:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:14.241 rmmod nvme_tcp 00:32:14.241 rmmod nvme_fabrics 00:32:14.241 rmmod nvme_keyring 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 109783 ']' 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 109783 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 109783 ']' 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 109783 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 109783 00:32:14.241 
13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 109783' 00:32:14.241 killing process with pid 109783 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 109783 00:32:14.241 [2024-05-15 13:50:27.197454] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:14.241 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 109783 00:32:14.501 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:14.501 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:14.501 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:14.501 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:14.501 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:14.501 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.501 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:14.501 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.501 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:32:14.501 ************************************ 00:32:14.501 END TEST nvmf_discovery_remove_ifc 00:32:14.501 ************************************ 00:32:14.501 00:32:14.501 real 0m14.366s 00:32:14.501 user 0m24.567s 00:32:14.501 sys 0m1.677s 00:32:14.501 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:14.501 13:50:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:14.501 13:50:27 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:14.501 13:50:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:14.501 13:50:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:14.501 13:50:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:14.501 ************************************ 00:32:14.501 START TEST nvmf_identify_kernel_target 00:32:14.501 ************************************ 00:32:14.501 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:14.501 * Looking for test storage... 
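Editor's note: before the next test starts, the harness tears down the SPDK applications with a killprocess helper, first for the host-side app (pid 109833 in this run) and then, inside nvmftestfini, for the nvmf target app (pid 109783). From the traced checks it verifies the pid is still alive with kill -0, reads the process name with ps --no-headers -o comm= (reactor_0/reactor_1 above), sends the signal, and waits for the exit. The sketch below is a hedged reconstruction from those traced commands only, not the verbatim helper in common/autotest_common.sh; the real helper also special-cases processes launched via sudo, which is what the "= sudo" comparison in the trace is about.

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                    # nothing left to kill
        local name
        name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0 / reactor_1
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        # In the harness the daemon is a child of this shell, so wait reaps it
        # and surfaces its exit status; ignore errors for already-gone pids.
        wait "$pid" 2> /dev/null || true
    }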
00:32:14.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:14.501 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:14.501 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:14.501 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.501 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.501 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.501 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.501 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.501 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.501 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.501 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.501 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:14.760 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:14.761 Cannot find device "nvmf_tgt_br" 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:14.761 Cannot find device "nvmf_tgt_br2" 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:14.761 Cannot find device "nvmf_tgt_br" 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:14.761 Cannot find device "nvmf_tgt_br2" 00:32:14.761 13:50:27 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:14.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:14.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:14.761 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:15.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:15.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:32:15.020 00:32:15.020 --- 10.0.0.2 ping statistics --- 00:32:15.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.020 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:15.020 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:15.020 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:32:15.020 00:32:15.020 --- 10.0.0.3 ping statistics --- 00:32:15.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.020 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:15.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:15.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:32:15.020 00:32:15.020 --- 10.0.0.1 ping statistics --- 00:32:15.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.020 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:15.020 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:15.021 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:15.021 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:15.021 13:50:27 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:15.021 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:15.021 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:15.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:15.279 Waiting for block devices as requested 00:32:15.538 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:15.538 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:15.538 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:15.538 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:15.538 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:15.538 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:15.538 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:15.538 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:15.538 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:15.538 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:15.538 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:32:15.538 No valid GPT data, bailing 00:32:15.797 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:15.797 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:15.797 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:15.797 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:15.797 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:15.797 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:32:15.797 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:32:15.797 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:32:15.797 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:32:15.797 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:32:15.798 No valid GPT data, bailing 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 
-- # pt= 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:32:15.798 No valid GPT data, bailing 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:32:15.798 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:32:15.798 No valid GPT data, bailing 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 
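Editor's note: the configure_kernel_target steps traced around this point build a Linux kernel NVMe-oF target entirely through configfs. After modprobe nvmet and probing the local block devices with spdk-gpt.py/blkid to find one that is not in use (which settles on /dev/nvme1n1 here), it creates a subsystem, a namespace and a port under /sys/kernel/config/nvmet and links the subsystem into the port. The xtrace output shows only the echoed values, not their redirect targets, so the attribute file names in the sketch below are my reading of the standard nvmet configfs layout rather than something visible in the log.

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    ns=$subsys/namespaces/1
    port=$nvmet/ports/1

    modprobe nvmet
    mkdir "$subsys" "$ns" "$port"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$ns/device_path"
    echo 1            > "$ns/enable"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"

    # Exposing the subsystem on the port is just a symlink:
    ln -s "$subsys" "$port/subsystems/"

Once that symlink exists, the discovery service at 10.0.0.1:4420 reports both the discovery subsystem and nqn.2016-06.io.spdk:testnqn, which is what the nvme discover output further down shows; spdk_nvme_identify is then pointed at the same address to produce the two controller dumps that follow.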
00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -a 10.0.0.1 -t tcp -s 4420 00:32:16.058 00:32:16.058 Discovery Log Number of Records 2, Generation counter 2 00:32:16.058 =====Discovery Log Entry 0====== 00:32:16.058 trtype: tcp 00:32:16.058 adrfam: ipv4 00:32:16.058 subtype: current discovery subsystem 00:32:16.058 treq: not specified, sq flow control disable supported 00:32:16.058 portid: 1 00:32:16.058 trsvcid: 4420 00:32:16.058 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:16.058 traddr: 10.0.0.1 00:32:16.058 eflags: none 00:32:16.058 sectype: none 00:32:16.058 =====Discovery Log Entry 1====== 00:32:16.058 trtype: tcp 00:32:16.058 adrfam: ipv4 00:32:16.058 subtype: nvme subsystem 00:32:16.058 treq: not specified, sq flow control disable supported 00:32:16.058 portid: 1 00:32:16.058 trsvcid: 4420 00:32:16.058 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:16.058 traddr: 10.0.0.1 00:32:16.058 eflags: none 00:32:16.058 sectype: none 00:32:16.058 13:50:28 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:16.058 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:16.058 ===================================================== 00:32:16.058 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:16.058 ===================================================== 00:32:16.058 Controller Capabilities/Features 00:32:16.058 ================================ 00:32:16.058 Vendor ID: 0000 00:32:16.058 Subsystem Vendor ID: 0000 00:32:16.058 Serial Number: 6436b26b69439536d52a 00:32:16.058 Model Number: Linux 00:32:16.058 Firmware Version: 6.7.0-68 00:32:16.058 Recommended Arb Burst: 0 00:32:16.058 IEEE OUI Identifier: 00 00 00 00:32:16.058 Multi-path I/O 00:32:16.058 May have multiple subsystem ports: No 00:32:16.058 May have multiple controllers: No 00:32:16.058 Associated with SR-IOV VF: No 00:32:16.058 Max Data Transfer Size: Unlimited 00:32:16.058 Max Number of Namespaces: 0 
00:32:16.058 Max Number of I/O Queues: 1024 00:32:16.058 NVMe Specification Version (VS): 1.3 00:32:16.058 NVMe Specification Version (Identify): 1.3 00:32:16.058 Maximum Queue Entries: 1024 00:32:16.058 Contiguous Queues Required: No 00:32:16.058 Arbitration Mechanisms Supported 00:32:16.058 Weighted Round Robin: Not Supported 00:32:16.058 Vendor Specific: Not Supported 00:32:16.058 Reset Timeout: 7500 ms 00:32:16.058 Doorbell Stride: 4 bytes 00:32:16.058 NVM Subsystem Reset: Not Supported 00:32:16.058 Command Sets Supported 00:32:16.058 NVM Command Set: Supported 00:32:16.058 Boot Partition: Not Supported 00:32:16.058 Memory Page Size Minimum: 4096 bytes 00:32:16.058 Memory Page Size Maximum: 4096 bytes 00:32:16.058 Persistent Memory Region: Not Supported 00:32:16.058 Optional Asynchronous Events Supported 00:32:16.058 Namespace Attribute Notices: Not Supported 00:32:16.058 Firmware Activation Notices: Not Supported 00:32:16.058 ANA Change Notices: Not Supported 00:32:16.058 PLE Aggregate Log Change Notices: Not Supported 00:32:16.058 LBA Status Info Alert Notices: Not Supported 00:32:16.058 EGE Aggregate Log Change Notices: Not Supported 00:32:16.058 Normal NVM Subsystem Shutdown event: Not Supported 00:32:16.058 Zone Descriptor Change Notices: Not Supported 00:32:16.058 Discovery Log Change Notices: Supported 00:32:16.058 Controller Attributes 00:32:16.058 128-bit Host Identifier: Not Supported 00:32:16.058 Non-Operational Permissive Mode: Not Supported 00:32:16.058 NVM Sets: Not Supported 00:32:16.058 Read Recovery Levels: Not Supported 00:32:16.058 Endurance Groups: Not Supported 00:32:16.058 Predictable Latency Mode: Not Supported 00:32:16.058 Traffic Based Keep ALive: Not Supported 00:32:16.058 Namespace Granularity: Not Supported 00:32:16.058 SQ Associations: Not Supported 00:32:16.059 UUID List: Not Supported 00:32:16.059 Multi-Domain Subsystem: Not Supported 00:32:16.059 Fixed Capacity Management: Not Supported 00:32:16.059 Variable Capacity Management: Not Supported 00:32:16.059 Delete Endurance Group: Not Supported 00:32:16.059 Delete NVM Set: Not Supported 00:32:16.059 Extended LBA Formats Supported: Not Supported 00:32:16.059 Flexible Data Placement Supported: Not Supported 00:32:16.059 00:32:16.059 Controller Memory Buffer Support 00:32:16.059 ================================ 00:32:16.059 Supported: No 00:32:16.059 00:32:16.059 Persistent Memory Region Support 00:32:16.059 ================================ 00:32:16.059 Supported: No 00:32:16.059 00:32:16.059 Admin Command Set Attributes 00:32:16.059 ============================ 00:32:16.059 Security Send/Receive: Not Supported 00:32:16.059 Format NVM: Not Supported 00:32:16.059 Firmware Activate/Download: Not Supported 00:32:16.059 Namespace Management: Not Supported 00:32:16.059 Device Self-Test: Not Supported 00:32:16.059 Directives: Not Supported 00:32:16.059 NVMe-MI: Not Supported 00:32:16.059 Virtualization Management: Not Supported 00:32:16.059 Doorbell Buffer Config: Not Supported 00:32:16.059 Get LBA Status Capability: Not Supported 00:32:16.059 Command & Feature Lockdown Capability: Not Supported 00:32:16.059 Abort Command Limit: 1 00:32:16.059 Async Event Request Limit: 1 00:32:16.059 Number of Firmware Slots: N/A 00:32:16.059 Firmware Slot 1 Read-Only: N/A 00:32:16.059 Firmware Activation Without Reset: N/A 00:32:16.059 Multiple Update Detection Support: N/A 00:32:16.059 Firmware Update Granularity: No Information Provided 00:32:16.059 Per-Namespace SMART Log: No 00:32:16.059 Asymmetric Namespace Access Log Page: 
Not Supported 00:32:16.059 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:16.059 Command Effects Log Page: Not Supported 00:32:16.059 Get Log Page Extended Data: Supported 00:32:16.059 Telemetry Log Pages: Not Supported 00:32:16.059 Persistent Event Log Pages: Not Supported 00:32:16.059 Supported Log Pages Log Page: May Support 00:32:16.059 Commands Supported & Effects Log Page: Not Supported 00:32:16.059 Feature Identifiers & Effects Log Page:May Support 00:32:16.059 NVMe-MI Commands & Effects Log Page: May Support 00:32:16.059 Data Area 4 for Telemetry Log: Not Supported 00:32:16.059 Error Log Page Entries Supported: 1 00:32:16.059 Keep Alive: Not Supported 00:32:16.059 00:32:16.059 NVM Command Set Attributes 00:32:16.059 ========================== 00:32:16.059 Submission Queue Entry Size 00:32:16.059 Max: 1 00:32:16.059 Min: 1 00:32:16.059 Completion Queue Entry Size 00:32:16.059 Max: 1 00:32:16.059 Min: 1 00:32:16.059 Number of Namespaces: 0 00:32:16.059 Compare Command: Not Supported 00:32:16.059 Write Uncorrectable Command: Not Supported 00:32:16.059 Dataset Management Command: Not Supported 00:32:16.059 Write Zeroes Command: Not Supported 00:32:16.059 Set Features Save Field: Not Supported 00:32:16.059 Reservations: Not Supported 00:32:16.059 Timestamp: Not Supported 00:32:16.059 Copy: Not Supported 00:32:16.059 Volatile Write Cache: Not Present 00:32:16.059 Atomic Write Unit (Normal): 1 00:32:16.059 Atomic Write Unit (PFail): 1 00:32:16.059 Atomic Compare & Write Unit: 1 00:32:16.059 Fused Compare & Write: Not Supported 00:32:16.059 Scatter-Gather List 00:32:16.059 SGL Command Set: Supported 00:32:16.059 SGL Keyed: Not Supported 00:32:16.059 SGL Bit Bucket Descriptor: Not Supported 00:32:16.059 SGL Metadata Pointer: Not Supported 00:32:16.059 Oversized SGL: Not Supported 00:32:16.059 SGL Metadata Address: Not Supported 00:32:16.059 SGL Offset: Supported 00:32:16.059 Transport SGL Data Block: Not Supported 00:32:16.059 Replay Protected Memory Block: Not Supported 00:32:16.059 00:32:16.059 Firmware Slot Information 00:32:16.059 ========================= 00:32:16.059 Active slot: 0 00:32:16.059 00:32:16.059 00:32:16.059 Error Log 00:32:16.059 ========= 00:32:16.059 00:32:16.059 Active Namespaces 00:32:16.059 ================= 00:32:16.059 Discovery Log Page 00:32:16.059 ================== 00:32:16.059 Generation Counter: 2 00:32:16.059 Number of Records: 2 00:32:16.059 Record Format: 0 00:32:16.059 00:32:16.059 Discovery Log Entry 0 00:32:16.059 ---------------------- 00:32:16.059 Transport Type: 3 (TCP) 00:32:16.059 Address Family: 1 (IPv4) 00:32:16.059 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:16.059 Entry Flags: 00:32:16.059 Duplicate Returned Information: 0 00:32:16.059 Explicit Persistent Connection Support for Discovery: 0 00:32:16.059 Transport Requirements: 00:32:16.059 Secure Channel: Not Specified 00:32:16.059 Port ID: 1 (0x0001) 00:32:16.059 Controller ID: 65535 (0xffff) 00:32:16.059 Admin Max SQ Size: 32 00:32:16.059 Transport Service Identifier: 4420 00:32:16.059 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:16.059 Transport Address: 10.0.0.1 00:32:16.059 Discovery Log Entry 1 00:32:16.059 ---------------------- 00:32:16.059 Transport Type: 3 (TCP) 00:32:16.059 Address Family: 1 (IPv4) 00:32:16.059 Subsystem Type: 2 (NVM Subsystem) 00:32:16.059 Entry Flags: 00:32:16.059 Duplicate Returned Information: 0 00:32:16.059 Explicit Persistent Connection Support for Discovery: 0 00:32:16.059 Transport Requirements: 00:32:16.059 
Secure Channel: Not Specified 00:32:16.059 Port ID: 1 (0x0001) 00:32:16.059 Controller ID: 65535 (0xffff) 00:32:16.059 Admin Max SQ Size: 32 00:32:16.059 Transport Service Identifier: 4420 00:32:16.059 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:16.059 Transport Address: 10.0.0.1 00:32:16.319 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:16.319 get_feature(0x01) failed 00:32:16.319 get_feature(0x02) failed 00:32:16.319 get_feature(0x04) failed 00:32:16.319 ===================================================== 00:32:16.319 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:16.319 ===================================================== 00:32:16.319 Controller Capabilities/Features 00:32:16.319 ================================ 00:32:16.319 Vendor ID: 0000 00:32:16.319 Subsystem Vendor ID: 0000 00:32:16.319 Serial Number: d4302889cbb027aa9227 00:32:16.319 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:16.319 Firmware Version: 6.7.0-68 00:32:16.319 Recommended Arb Burst: 6 00:32:16.319 IEEE OUI Identifier: 00 00 00 00:32:16.319 Multi-path I/O 00:32:16.319 May have multiple subsystem ports: Yes 00:32:16.319 May have multiple controllers: Yes 00:32:16.319 Associated with SR-IOV VF: No 00:32:16.319 Max Data Transfer Size: Unlimited 00:32:16.319 Max Number of Namespaces: 1024 00:32:16.319 Max Number of I/O Queues: 128 00:32:16.319 NVMe Specification Version (VS): 1.3 00:32:16.319 NVMe Specification Version (Identify): 1.3 00:32:16.319 Maximum Queue Entries: 1024 00:32:16.319 Contiguous Queues Required: No 00:32:16.319 Arbitration Mechanisms Supported 00:32:16.319 Weighted Round Robin: Not Supported 00:32:16.319 Vendor Specific: Not Supported 00:32:16.319 Reset Timeout: 7500 ms 00:32:16.319 Doorbell Stride: 4 bytes 00:32:16.319 NVM Subsystem Reset: Not Supported 00:32:16.319 Command Sets Supported 00:32:16.319 NVM Command Set: Supported 00:32:16.319 Boot Partition: Not Supported 00:32:16.319 Memory Page Size Minimum: 4096 bytes 00:32:16.319 Memory Page Size Maximum: 4096 bytes 00:32:16.319 Persistent Memory Region: Not Supported 00:32:16.319 Optional Asynchronous Events Supported 00:32:16.319 Namespace Attribute Notices: Supported 00:32:16.319 Firmware Activation Notices: Not Supported 00:32:16.319 ANA Change Notices: Supported 00:32:16.319 PLE Aggregate Log Change Notices: Not Supported 00:32:16.319 LBA Status Info Alert Notices: Not Supported 00:32:16.319 EGE Aggregate Log Change Notices: Not Supported 00:32:16.319 Normal NVM Subsystem Shutdown event: Not Supported 00:32:16.319 Zone Descriptor Change Notices: Not Supported 00:32:16.319 Discovery Log Change Notices: Not Supported 00:32:16.319 Controller Attributes 00:32:16.319 128-bit Host Identifier: Supported 00:32:16.319 Non-Operational Permissive Mode: Not Supported 00:32:16.319 NVM Sets: Not Supported 00:32:16.319 Read Recovery Levels: Not Supported 00:32:16.319 Endurance Groups: Not Supported 00:32:16.319 Predictable Latency Mode: Not Supported 00:32:16.319 Traffic Based Keep ALive: Supported 00:32:16.319 Namespace Granularity: Not Supported 00:32:16.319 SQ Associations: Not Supported 00:32:16.319 UUID List: Not Supported 00:32:16.319 Multi-Domain Subsystem: Not Supported 00:32:16.319 Fixed Capacity Management: Not Supported 00:32:16.319 Variable Capacity Management: Not Supported 00:32:16.319 
Delete Endurance Group: Not Supported 00:32:16.319 Delete NVM Set: Not Supported 00:32:16.319 Extended LBA Formats Supported: Not Supported 00:32:16.319 Flexible Data Placement Supported: Not Supported 00:32:16.319 00:32:16.319 Controller Memory Buffer Support 00:32:16.319 ================================ 00:32:16.319 Supported: No 00:32:16.319 00:32:16.319 Persistent Memory Region Support 00:32:16.319 ================================ 00:32:16.319 Supported: No 00:32:16.319 00:32:16.319 Admin Command Set Attributes 00:32:16.319 ============================ 00:32:16.319 Security Send/Receive: Not Supported 00:32:16.319 Format NVM: Not Supported 00:32:16.319 Firmware Activate/Download: Not Supported 00:32:16.319 Namespace Management: Not Supported 00:32:16.319 Device Self-Test: Not Supported 00:32:16.319 Directives: Not Supported 00:32:16.319 NVMe-MI: Not Supported 00:32:16.319 Virtualization Management: Not Supported 00:32:16.319 Doorbell Buffer Config: Not Supported 00:32:16.319 Get LBA Status Capability: Not Supported 00:32:16.319 Command & Feature Lockdown Capability: Not Supported 00:32:16.319 Abort Command Limit: 4 00:32:16.319 Async Event Request Limit: 4 00:32:16.319 Number of Firmware Slots: N/A 00:32:16.319 Firmware Slot 1 Read-Only: N/A 00:32:16.319 Firmware Activation Without Reset: N/A 00:32:16.319 Multiple Update Detection Support: N/A 00:32:16.319 Firmware Update Granularity: No Information Provided 00:32:16.319 Per-Namespace SMART Log: Yes 00:32:16.320 Asymmetric Namespace Access Log Page: Supported 00:32:16.320 ANA Transition Time : 10 sec 00:32:16.320 00:32:16.320 Asymmetric Namespace Access Capabilities 00:32:16.320 ANA Optimized State : Supported 00:32:16.320 ANA Non-Optimized State : Supported 00:32:16.320 ANA Inaccessible State : Supported 00:32:16.320 ANA Persistent Loss State : Supported 00:32:16.320 ANA Change State : Supported 00:32:16.320 ANAGRPID is not changed : No 00:32:16.320 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:16.320 00:32:16.320 ANA Group Identifier Maximum : 128 00:32:16.320 Number of ANA Group Identifiers : 128 00:32:16.320 Max Number of Allowed Namespaces : 1024 00:32:16.320 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:16.320 Command Effects Log Page: Supported 00:32:16.320 Get Log Page Extended Data: Supported 00:32:16.320 Telemetry Log Pages: Not Supported 00:32:16.320 Persistent Event Log Pages: Not Supported 00:32:16.320 Supported Log Pages Log Page: May Support 00:32:16.320 Commands Supported & Effects Log Page: Not Supported 00:32:16.320 Feature Identifiers & Effects Log Page:May Support 00:32:16.320 NVMe-MI Commands & Effects Log Page: May Support 00:32:16.320 Data Area 4 for Telemetry Log: Not Supported 00:32:16.320 Error Log Page Entries Supported: 128 00:32:16.320 Keep Alive: Supported 00:32:16.320 Keep Alive Granularity: 1000 ms 00:32:16.320 00:32:16.320 NVM Command Set Attributes 00:32:16.320 ========================== 00:32:16.320 Submission Queue Entry Size 00:32:16.320 Max: 64 00:32:16.320 Min: 64 00:32:16.320 Completion Queue Entry Size 00:32:16.320 Max: 16 00:32:16.320 Min: 16 00:32:16.320 Number of Namespaces: 1024 00:32:16.320 Compare Command: Not Supported 00:32:16.320 Write Uncorrectable Command: Not Supported 00:32:16.320 Dataset Management Command: Supported 00:32:16.320 Write Zeroes Command: Supported 00:32:16.320 Set Features Save Field: Not Supported 00:32:16.320 Reservations: Not Supported 00:32:16.320 Timestamp: Not Supported 00:32:16.320 Copy: Not Supported 00:32:16.320 Volatile Write Cache: Present 
00:32:16.320 Atomic Write Unit (Normal): 1 00:32:16.320 Atomic Write Unit (PFail): 1 00:32:16.320 Atomic Compare & Write Unit: 1 00:32:16.320 Fused Compare & Write: Not Supported 00:32:16.320 Scatter-Gather List 00:32:16.320 SGL Command Set: Supported 00:32:16.320 SGL Keyed: Not Supported 00:32:16.320 SGL Bit Bucket Descriptor: Not Supported 00:32:16.320 SGL Metadata Pointer: Not Supported 00:32:16.320 Oversized SGL: Not Supported 00:32:16.320 SGL Metadata Address: Not Supported 00:32:16.320 SGL Offset: Supported 00:32:16.320 Transport SGL Data Block: Not Supported 00:32:16.320 Replay Protected Memory Block: Not Supported 00:32:16.320 00:32:16.320 Firmware Slot Information 00:32:16.320 ========================= 00:32:16.320 Active slot: 0 00:32:16.320 00:32:16.320 Asymmetric Namespace Access 00:32:16.320 =========================== 00:32:16.320 Change Count : 0 00:32:16.320 Number of ANA Group Descriptors : 1 00:32:16.320 ANA Group Descriptor : 0 00:32:16.320 ANA Group ID : 1 00:32:16.320 Number of NSID Values : 1 00:32:16.320 Change Count : 0 00:32:16.320 ANA State : 1 00:32:16.320 Namespace Identifier : 1 00:32:16.320 00:32:16.320 Commands Supported and Effects 00:32:16.320 ============================== 00:32:16.320 Admin Commands 00:32:16.320 -------------- 00:32:16.320 Get Log Page (02h): Supported 00:32:16.320 Identify (06h): Supported 00:32:16.320 Abort (08h): Supported 00:32:16.320 Set Features (09h): Supported 00:32:16.320 Get Features (0Ah): Supported 00:32:16.320 Asynchronous Event Request (0Ch): Supported 00:32:16.320 Keep Alive (18h): Supported 00:32:16.320 I/O Commands 00:32:16.320 ------------ 00:32:16.320 Flush (00h): Supported 00:32:16.320 Write (01h): Supported LBA-Change 00:32:16.320 Read (02h): Supported 00:32:16.320 Write Zeroes (08h): Supported LBA-Change 00:32:16.320 Dataset Management (09h): Supported 00:32:16.320 00:32:16.320 Error Log 00:32:16.320 ========= 00:32:16.320 Entry: 0 00:32:16.320 Error Count: 0x3 00:32:16.320 Submission Queue Id: 0x0 00:32:16.320 Command Id: 0x5 00:32:16.320 Phase Bit: 0 00:32:16.320 Status Code: 0x2 00:32:16.320 Status Code Type: 0x0 00:32:16.320 Do Not Retry: 1 00:32:16.320 Error Location: 0x28 00:32:16.320 LBA: 0x0 00:32:16.320 Namespace: 0x0 00:32:16.320 Vendor Log Page: 0x0 00:32:16.320 ----------- 00:32:16.320 Entry: 1 00:32:16.320 Error Count: 0x2 00:32:16.320 Submission Queue Id: 0x0 00:32:16.320 Command Id: 0x5 00:32:16.320 Phase Bit: 0 00:32:16.320 Status Code: 0x2 00:32:16.320 Status Code Type: 0x0 00:32:16.320 Do Not Retry: 1 00:32:16.320 Error Location: 0x28 00:32:16.320 LBA: 0x0 00:32:16.320 Namespace: 0x0 00:32:16.320 Vendor Log Page: 0x0 00:32:16.320 ----------- 00:32:16.320 Entry: 2 00:32:16.320 Error Count: 0x1 00:32:16.320 Submission Queue Id: 0x0 00:32:16.320 Command Id: 0x4 00:32:16.320 Phase Bit: 0 00:32:16.320 Status Code: 0x2 00:32:16.320 Status Code Type: 0x0 00:32:16.320 Do Not Retry: 1 00:32:16.320 Error Location: 0x28 00:32:16.320 LBA: 0x0 00:32:16.320 Namespace: 0x0 00:32:16.320 Vendor Log Page: 0x0 00:32:16.320 00:32:16.320 Number of Queues 00:32:16.320 ================ 00:32:16.320 Number of I/O Submission Queues: 128 00:32:16.320 Number of I/O Completion Queues: 128 00:32:16.320 00:32:16.320 ZNS Specific Controller Data 00:32:16.320 ============================ 00:32:16.320 Zone Append Size Limit: 0 00:32:16.320 00:32:16.320 00:32:16.320 Active Namespaces 00:32:16.320 ================= 00:32:16.320 get_feature(0x05) failed 00:32:16.320 Namespace ID:1 00:32:16.320 Command Set Identifier: NVM (00h) 
00:32:16.320 Deallocate: Supported 00:32:16.320 Deallocated/Unwritten Error: Not Supported 00:32:16.320 Deallocated Read Value: Unknown 00:32:16.320 Deallocate in Write Zeroes: Not Supported 00:32:16.320 Deallocated Guard Field: 0xFFFF 00:32:16.320 Flush: Supported 00:32:16.320 Reservation: Not Supported 00:32:16.320 Namespace Sharing Capabilities: Multiple Controllers 00:32:16.320 Size (in LBAs): 1310720 (5GiB) 00:32:16.320 Capacity (in LBAs): 1310720 (5GiB) 00:32:16.320 Utilization (in LBAs): 1310720 (5GiB) 00:32:16.320 UUID: ed6f5dbc-a498-47ec-862b-f768c2ef3305 00:32:16.320 Thin Provisioning: Not Supported 00:32:16.320 Per-NS Atomic Units: Yes 00:32:16.320 Atomic Boundary Size (Normal): 0 00:32:16.320 Atomic Boundary Size (PFail): 0 00:32:16.320 Atomic Boundary Offset: 0 00:32:16.320 NGUID/EUI64 Never Reused: No 00:32:16.320 ANA group ID: 1 00:32:16.320 Namespace Write Protected: No 00:32:16.320 Number of LBA Formats: 1 00:32:16.320 Current LBA Format: LBA Format #00 00:32:16.320 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:32:16.320 00:32:16.320 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:16.320 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:16.320 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:16.320 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:16.320 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:16.320 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:16.320 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:16.320 rmmod nvme_tcp 00:32:16.320 rmmod nvme_fabrics 00:32:16.578 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:16.578 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:16.578 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:16.578 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:16.578 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:16.579 
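The identify dump above boils down to pointing spdk_nvme_identify at the kernel nvmet listener with a transport ID string; the get_feature(0x01/0x02/0x04/0x05) failures are printed by the tool but the test still finishes cleanly below. A minimal by-hand sketch, using the binary path and addresses from this run:

  # Query the kernel NVMe-oF/TCP target that the test configured earlier.
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
  TRID=' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  "$SPDK_BIN/spdk_nvme_identify" -r "$TRID"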
13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:16.579 13:50:29 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:17.145 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:17.414 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:17.414 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:17.414 00:32:17.414 real 0m2.874s 00:32:17.414 user 0m0.982s 00:32:17.414 sys 0m1.347s 00:32:17.414 13:50:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:17.414 ************************************ 00:32:17.414 13:50:30 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:17.414 END TEST nvmf_identify_kernel_target 00:32:17.414 ************************************ 00:32:17.414 13:50:30 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:17.414 13:50:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:17.414 13:50:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:17.414 13:50:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:17.414 ************************************ 00:32:17.414 START TEST nvmf_auth_host 00:32:17.414 ************************************ 00:32:17.414 13:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:17.414 * Looking for test storage... 
00:32:17.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:17.414 13:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:17.414 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.673 13:50:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:17.674 Cannot find device "nvmf_tgt_br" 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:17.674 Cannot find device "nvmf_tgt_br2" 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:17.674 Cannot find device "nvmf_tgt_br" 
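The "Cannot find device" messages here appear to be nvmf_veth_init tearing down links that do not exist yet; the trace that follows then builds the test network. A condensed sketch of that topology, using the names and addresses from this run (nvmf_tgt_ns_spdk, nvmf_br, 10.0.0.1/2/3), not a drop-in replacement for nvmf/common.sh:

  # Initiator stays in the default netns (10.0.0.1); the target side gets its
  # own namespace with two interfaces (10.0.0.2 and 10.0.0.3).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side peers together and let NVMe/TCP traffic in.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # sanity check: initiator -> target namespace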
00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:17.674 Cannot find device "nvmf_tgt_br2" 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:17.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:17.674 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:17.674 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:17.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:17.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:32:17.933 00:32:17.933 --- 10.0.0.2 ping statistics --- 00:32:17.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.933 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:17.933 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:17.933 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:32:17.933 00:32:17.933 --- 10.0.0.3 ping statistics --- 00:32:17.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.933 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:17.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:17.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:32:17.933 00:32:17.933 --- 10.0.0.1 ping statistics --- 00:32:17.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.933 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=110725 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 110725 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 110725 ']' 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:17.933 13:50:30 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:17.933 13:50:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.310 13:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:19.310 13:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:19.310 13:50:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:19.310 13:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:19.310 13:50:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c0547c01071ac8ba520ab304810ddbd5 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Ynl 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c0547c01071ac8ba520ab304810ddbd5 0 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c0547c01071ac8ba520ab304810ddbd5 0 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c0547c01071ac8ba520ab304810ddbd5 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Ynl 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Ynl 00:32:19.310 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Ynl 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@724 -- # local -A digests 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c0f51fb915807d1e43c368cc805075d6b56ab32d16bb42fe712e21e7c16a038a 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.uP0 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c0f51fb915807d1e43c368cc805075d6b56ab32d16bb42fe712e21e7c16a038a 3 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c0f51fb915807d1e43c368cc805075d6b56ab32d16bb42fe712e21e7c16a038a 3 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c0f51fb915807d1e43c368cc805075d6b56ab32d16bb42fe712e21e7c16a038a 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.uP0 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.uP0 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.uP0 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8341f11be03319a3af6e939d628dbfbf5387eea8ebc12e01 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9Ke 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8341f11be03319a3af6e939d628dbfbf5387eea8ebc12e01 0 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8341f11be03319a3af6e939d628dbfbf5387eea8ebc12e01 0 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8341f11be03319a3af6e939d628dbfbf5387eea8ebc12e01 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # 
python - 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9Ke 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9Ke 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.9Ke 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=132dd2374faa98f2eded5e3fb1177619754d93b1ae3fef6e 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Amq 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 132dd2374faa98f2eded5e3fb1177619754d93b1ae3fef6e 2 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 132dd2374faa98f2eded5e3fb1177619754d93b1ae3fef6e 2 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=132dd2374faa98f2eded5e3fb1177619754d93b1ae3fef6e 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Amq 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Amq 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Amq 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b81043a90aefb63cf455554af2a06294 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.16B 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b81043a90aefb63cf455554af2a06294 
1 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b81043a90aefb63cf455554af2a06294 1 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b81043a90aefb63cf455554af2a06294 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.16B 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.16B 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.16B 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b95f7a4fb4c646129600fbd3a8df62d0 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.5H7 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b95f7a4fb4c646129600fbd3a8df62d0 1 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b95f7a4fb4c646129600fbd3a8df62d0 1 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b95f7a4fb4c646129600fbd3a8df62d0 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.5H7 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.5H7 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.5H7 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:19.311 13:50:32 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=35966da3bb4b727f76184347f346515949a2b7223ac3f07f 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wkD 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 35966da3bb4b727f76184347f346515949a2b7223ac3f07f 2 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 35966da3bb4b727f76184347f346515949a2b7223ac3f07f 2 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=35966da3bb4b727f76184347f346515949a2b7223ac3f07f 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:19.311 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wkD 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wkD 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.wkD 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9b810d9d2f5a0a89fdef16e6149e2488 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Yca 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9b810d9d2f5a0a89fdef16e6149e2488 0 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9b810d9d2f5a0a89fdef16e6149e2488 0 00:32:19.570 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9b810d9d2f5a0a89fdef16e6149e2488 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Yca 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Yca 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Yca 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1567e4679412a423383f5817740b11bd49a067938fd9715aeff9ac46f5975450 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.yJ8 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1567e4679412a423383f5817740b11bd49a067938fd9715aeff9ac46f5975450 3 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1567e4679412a423383f5817740b11bd49a067938fd9715aeff9ac46f5975450 3 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1567e4679412a423383f5817740b11bd49a067938fd9715aeff9ac46f5975450 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.yJ8 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.yJ8 00:32:19.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.yJ8 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 110725 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 110725 ']' 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
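The block above is gen_dhchap_key filling the keys[]/ckeys[] arrays: random hex from /dev/urandom, wrapped into a DHHC-1:<digest>:<base64 secret>: string and written to a mode-0600 temp file. A rough standalone equivalent follows; treating the hex string itself as the passphrase and appending its little-endian CRC32 before base64-encoding is inferred from the DHHC-1 values printed later in this log, not lifted from nvmf/common.sh, so treat it as an assumption:

  gen_key() {   # usage: gen_key <digest id 0..3> <hex length 32|48|64>
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t spdk.key-XXX)
    # DHHC-1 secret = base64(passphrase bytes + little-endian CRC32 of them).
    python3 -c 'import base64,struct,sys,zlib; k=sys.argv[2].encode(); print("DHHC-1:%02x:%s:" % (int(sys.argv[1]), base64.b64encode(k+struct.pack("<I", zlib.crc32(k))).decode()))' "$digest" "$key" > "$file"
    chmod 0600 "$file"
    echo "$file"
  }
  keys[0]=$(gen_key 0 32)    # digest 0 = null, 32 hex chars (16 random bytes)
  ckeys[0]=$(gen_key 3 64)   # digest 3 = sha512, 64 hex chars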
00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:19.571 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.829 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:19.829 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:32:19.829 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:19.829 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ynl 00:32:19.829 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.829 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.829 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.829 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.uP0 ]] 00:32:19.829 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uP0 00:32:19.829 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.829 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9Ke 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Amq ]] 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Amq 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.16B 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.830 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.5H7 ]] 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5H7 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
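Each generated key file is then registered with the running nvmf_tgt over its RPC socket. In plain script form this is roughly the loop below; rpc_cmd in the trace is a wrapper around scripts/rpc.py, and the key0/ckey0 naming follows the arrays built above:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in "${!keys[@]}"; do
    "$RPC" keyring_file_add_key "key$i" "${keys[i]}"
    # Controller keys are optional; ckey4 is left empty in this run.
    if [[ -n ${ckeys[i]} ]]; then
      "$RPC" keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
  done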
00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.wkD 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Yca ]] 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Yca 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.yJ8 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
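From here the trace is nvmet_auth_init and configure_kernel_target pointing the kernel nvmet driver at a local NVMe namespace on the initiator address. Stripped of the helper plumbing, the configfs sequence looks roughly like this; xtrace hides redirection targets, so the attribute file names are a best guess at the stock nvmet configfs layout, while the NQNs, device and address come from this run:

  # Expose /dev/nvme1n1 (the block device picked below) at 10.0.0.1:4420.
  modprobe nvmet nvmet_tcp
  SUBSYS=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  PORT=/sys/kernel/config/nvmet/ports/1
  mkdir "$SUBSYS"
  mkdir "$SUBSYS/namespaces/1"
  mkdir "$PORT"
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$SUBSYS/attr_model"
  echo 1            > "$SUBSYS/attr_allow_any_host"
  echo /dev/nvme1n1 > "$SUBSYS/namespaces/1/device_path"
  echo 1            > "$SUBSYS/namespaces/1/enable"
  echo 10.0.0.1     > "$PORT/addr_traddr"
  echo tcp          > "$PORT/addr_trtype"
  echo 4420         > "$PORT/addr_trsvcid"
  echo ipv4         > "$PORT/addr_adrfam"
  ln -s "$SUBSYS" "$PORT/subsystems/"
  # auth.sh then swaps allow-any-host for one explicit host NQN:
  mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > "$SUBSYS/attr_allow_any_host"
  ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 "$SUBSYS/allowed_hosts/"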
00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:20.089 13:50:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:20.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:20.348 Waiting for block devices as requested 00:32:20.348 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:20.605 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:32:21.171 No valid GPT data, bailing 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:32:21.171 No valid GPT data, bailing 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in 
/sys/block/nvme* 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:32:21.171 No valid GPT data, bailing 00:32:21.171 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:32:21.429 No valid GPT data, bailing 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:21.429 13:50:34 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:21.429 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -a 10.0.0.1 -t tcp -s 4420 00:32:21.429 00:32:21.429 Discovery Log Number of Records 2, Generation counter 2 00:32:21.429 =====Discovery Log Entry 0====== 00:32:21.430 trtype: tcp 00:32:21.430 adrfam: ipv4 00:32:21.430 subtype: current discovery subsystem 00:32:21.430 treq: not specified, sq flow control disable supported 00:32:21.430 portid: 1 00:32:21.430 trsvcid: 4420 00:32:21.430 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:21.430 traddr: 10.0.0.1 00:32:21.430 eflags: none 00:32:21.430 sectype: none 00:32:21.430 =====Discovery Log Entry 1====== 00:32:21.430 trtype: tcp 00:32:21.430 adrfam: ipv4 00:32:21.430 subtype: nvme subsystem 00:32:21.430 treq: not specified, sq flow control disable supported 00:32:21.430 portid: 1 00:32:21.430 trsvcid: 4420 00:32:21.430 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:21.430 traddr: 10.0.0.1 00:32:21.430 eflags: none 00:32:21.430 sectype: none 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@93 -- # IFS=, 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.430 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.689 nvme0n1 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.689 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.690 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.690 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.690 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.690 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.690 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:21.690 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.690 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.948 nvme0n1 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.948 nvme0n1 00:32:21.948 13:50:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.948 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.948 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.948 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.949 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.949 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.207 13:50:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.207 nvme0n1 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:22.207 13:50:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.207 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.467 nvme0n1 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.467 nvme0n1 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.467 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:22.726 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe3072 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.984 13:50:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.242 nvme0n1 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.242 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.243 nvme0n1 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.243 13:50:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.243 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.501 nvme0n1 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.501 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.759 nvme0n1 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
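Each (digest, dhgroup, keyid) pass above first reprograms the kernel target's expectations via nvmet_auth_set_key; its echoes are visible at host/auth.sh@48-@51 but, again, xtrace hides where they are redirected. Below is a minimal sketch of what such a helper plausibly writes, assuming the dhchap_* attributes of the nvmet configfs host entry; the real helper takes a keyid and pulls the secrets from its keys/ckeys arrays, while this hypothetical variant takes them directly:

  # Hypothetical condensation of nvmet_auth_set_key (host/auth.sh@42-@51); the
  # dhchap_hash/dhchap_dhgroup/dhchap_key/dhchap_ctrl_key attribute names are
  # assumptions about the redirect targets, not read from the log.
  nvmet_auth_set_key_sketch() {
      local digest=$1 dhgroup=$2 key=$3 ckey=$4
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac($digest)" > "$host/dhchap_hash"      # @48
      echo "$dhgroup"      > "$host/dhchap_dhgroup"   # @49
      echo "$key"          > "$host/dhchap_key"       # @50
      if [[ -n $ckey ]]; then
          echo "$ckey" > "$host/dhchap_ctrl_key"      # @51, skipped for keyids without a ckey
      fi
  }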
00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.759 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.016 nvme0n1 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:24.016 13:50:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
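On the target side, the nvmet_auth_set_key calls interleaved with these iterations stage the matching secrets for the kernel nvmet host entry. The echoes of 'hmac(sha256)', the ffdhe group name and the DHHC-1 strings plausibly correspond to configfs writes along the following lines; the paths and attribute names are an assumption based on the upstream Linux nvmet target, since the trace shows only the echoed values, not their destinations:

# assumed nvmet configfs layout; secrets elided, full DHHC-1 values appear in the trace above
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'   > "$host/dhchap_hash"       # digest for this pass
echo ffdhe4096        > "$host/dhchap_dhgroup"    # DH group for this pass
echo 'DHHC-1:00:...'  > "$host/dhchap_key"        # keys[keyid]
echo 'DHHC-1:03:...'  > "$host/dhchap_ctrl_key"   # ckeys[keyid], written only when non-empty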
00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.578 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.835 nvme0n1 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 1 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.835 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.836 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.836 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.836 13:50:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.836 13:50:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.836 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.836 13:50:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.091 nvme0n1 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:25.091 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.092 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.347 nvme0n1 00:32:25.347 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.347 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.347 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.347 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.347 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.347 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.347 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.347 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.347 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.347 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.676 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.676 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.676 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.677 nvme0n1 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.677 13:50:38 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.677 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.948 nvme0n1 00:32:25.948 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.948 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.948 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.948 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.948 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.948 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.948 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.948 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.948 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.948 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.948 13:50:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.948 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:25.948 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.948 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:25.949 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.949 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:25.949 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:25.949 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:25.949 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:25.949 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:25.949 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:25.949 13:50:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:27.844 13:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:27.844 13:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:27.844 13:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:27.844 13:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:27.844 13:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.844 13:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:27.844 13:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:27.844 13:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:27.844 13:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.844 13:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.845 13:50:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.102 nvme0n1 00:32:28.102 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.102 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.102 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.102 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.102 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.103 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.667 nvme0n1 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.667 
13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.667 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.924 nvme0n1 00:32:28.924 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.924 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.924 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.924 13:50:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.924 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.924 13:50:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.924 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.924 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.924 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.924 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 3 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.182 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.440 nvme0n1 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.440 13:50:42 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.440 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.032 nvme0n1 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.032 13:50:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.598 nvme0n1 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.598 13:50:43 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.598 13:50:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.533 nvme0n1 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.533 13:50:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.534 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:31.534 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.534 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.101 nvme0n1 00:32:32.101 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.101 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.101 13:50:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.101 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.101 13:50:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.101 
13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.101 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
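[editor note] The nvmet_auth_set_key traces above (host/auth.sh@42-51) stage the target-side DH-HMAC-CHAP parameters for the digest, DH group, and key ID selected in each iteration; the xtrace only records the echoed values, not their destinations. A minimal sketch of that step follows; NVMET_HOST_DIR and the dhchap_* attribute file names are assumptions for illustration and do not appear in this log.

    # Sketch reconstructed from the auth.sh@42-51 trace lines above.
    # NVMET_HOST_DIR and the dhchap_* file names are hypothetical placeholders;
    # the redirect targets are not visible in the xtrace output.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}

        echo "hmac(${digest})" > "${NVMET_HOST_DIR}/dhchap_hash"     # e.g. hmac(sha256)
        echo "${dhgroup}"      > "${NVMET_HOST_DIR}/dhchap_dhgroup"  # e.g. ffdhe8192
        echo "${key}"          > "${NVMET_HOST_DIR}/dhchap_key"      # DHHC-1:... host key
        # Controller key is only set when a ckey exists for this key ID.
        [[ -z ${ckey} ]] || echo "${ckey}" > "${NVMET_HOST_DIR}/dhchap_ctrl_key"
    }
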
00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.102 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.670 nvme0n1 00:32:32.670 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.670 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.670 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.670 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.670 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.670 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.670 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.670 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.670 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.670 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:32.929 
13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.929 13:50:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.497 nvme0n1 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:33.497 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.498 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.762 nvme0n1 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
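[editor note] The connect_authenticate traces (host/auth.sh@55-65) repeat the same host-side sequence for every digest/dhgroup/keyid combination. A condensed sketch of that flow, reconstructed only from the RPC calls visible in this log (rpc_cmd is the test harness RPC wrapper seen in the trace), is:

    # Condensed sketch of the host-side flow traced as connect_authenticate above.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Restrict the initiator to the digest/DH group under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Connect with the key ID under test (10.0.0.1:4420 is the initiator IP in this run).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Verify the authenticated controller came up, then tear it down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

Detaching nvme0 at the end of each combination is what produces the repeated "nvme0n1 ... bdev_nvme_detach_controller nvme0" pattern in this log and keeps each digest/dhgroup/keyid test independent of the previous one.
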
00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.762 nvme0n1 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.762 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.044 nvme0n1 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.044 13:50:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:34.044 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.045 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.303 nvme0n1 00:32:34.303 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.303 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.303 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.303 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.303 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.304 nvme0n1 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:34.304 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.563 nvme0n1 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.563 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
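[editor note] Stepping back, the host/auth.sh@100-103 markers in this trace show the nested loop that drives all of these iterations. Reconstructed from the loop headers visible above (array contents beyond this excerpt are not asserted), it amounts to:

    # Loop structure per the host/auth.sh@100-103 trace markers.
    for digest in "${digests[@]}"; do        # sha256 and sha384 appear in this excerpt
        for dhgroup in "${dhgroups[@]}"; do  # ffdhe8192, ffdhe2048, ffdhe3072 appear here
            for keyid in "${!keys[@]}"; do   # key IDs 0 through 4 in this run
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side key staging
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side connect and verify
            done
        done
    done
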
00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.564 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.822 nvme0n1 00:32:34.822 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.822 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.823 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.082 nvme0n1 00:32:35.082 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.082 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.082 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.082 13:50:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.082 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.082 13:50:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.082 nvme0n1 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.082 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.341 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.342 nvme0n1 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.342 13:50:48 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.342 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.601 nvme0n1 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.601 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.861 nvme0n1 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.861 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.121 13:50:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.121 13:50:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.121 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.121 13:50:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.121 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.381 nvme0n1 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:36.381 13:50:49 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.381 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.640 nvme0n1 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:36.640 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.899 nvme0n1 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.899 13:50:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.246 nvme0n1 00:32:37.246 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.246 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.246 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.246 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.246 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.246 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.541 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.801 nvme0n1 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.801 13:50:50 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.801 13:50:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.060 nvme0n1 00:32:38.060 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.060 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.060 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.060 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.060 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.060 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.320 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.321 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.321 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.321 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.321 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.321 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.321 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:38.321 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.321 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.579 nvme0n1 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.579 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.580 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.580 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.580 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
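The get_main_ns_ip helper being traced at this point resolves the address used for every attach in this run. Its body is not printed in the log, so the sketch below is only a reconstruction from the nvmf/common.sh lines above: the candidate map and the indirect expansion that yields 10.0.0.1 are taken from the trace, while the $TEST_TRANSPORT variable name and the error handling are assumptions.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      # map each transport to the env var that carries its address (as echoed in the trace)
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # $TEST_TRANSPORT is an assumed name; the trace shows it expanding to "tcp"
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # ip=NVMF_INITIATOR_IP in this run
      [[ -z ${!ip} ]] && return 1            # indirect expansion -> 10.0.0.1
      echo "${!ip}"                          # printed as 10.0.0.1 above
  }
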
00:32:38.580 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.580 13:50:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.580 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:38.580 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.580 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.148 nvme0n1 00:32:39.148 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.148 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.148 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.148 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.148 13:50:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.148 13:50:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
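With the ffdhe6144 round finished, the same per-iteration pattern starts over for ffdhe8192. Distilled from the rpc_cmd calls echoed in this trace, one host-side iteration (the ffdhe6144/key4 pass that just completed) amounts to the sketch below; rpc_cmd is presumably the suite's RPC wrapper, and every address, NQN and key name is one already shown in the log rather than anything new.

  # restrict DH-HMAC-CHAP to the digest/dhgroup under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
  # attach to the target; keyid 4 carries no controller key in this run, so no --dhchap-ctrlr-key
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
  # confirm the authenticated controller is present, then detach before the next keyid
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0
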
00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:39.148 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.149 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.716 nvme0n1 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:39.716 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.717 13:50:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.289 nvme0n1 00:32:40.289 13:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.289 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.289 13:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.289 13:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.289 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.547 13:50:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.115 nvme0n1 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.115 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.683 nvme0n1 00:32:41.683 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.683 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:41.683 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.683 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.683 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.683 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.942 13:50:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.942 13:50:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.511 nvme0n1 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.511 nvme0n1 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.511 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.770 13:50:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.770 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.771 nvme0n1 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.771 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.030 nvme0n1 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.030 13:50:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.030 13:50:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:43.030 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.031 13:50:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.031 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.291 nvme0n1 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.291 nvme0n1 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.291 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.551 nvme0n1 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.551 
13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.551 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.552 13:50:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.552 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.811 nvme0n1 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.811 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.069 nvme0n1 00:32:44.069 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.069 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.069 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.069 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.069 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.069 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.069 13:50:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.069 13:50:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.069 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.069 13:50:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
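[editorial sketch] The trace above is one pass of the test's per-key cycle: set the key on the target side (nvmet_auth_set_key), restrict the host to a single digest/dhgroup pair (bdev_nvme_set_options), attach with the matching --dhchap-key/--dhchap-ctrlr-key, confirm the controller shows up, then detach it. A minimal sketch of that cycle, assuming rpc_cmd is the harness's RPC wrapper and reusing only the RPC calls, NQNs, address and key names visible in the trace:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]   # controller authenticated and attached
    rpc_cmd bdev_nvme_detach_controller nvme0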
00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.070 nvme0n1 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.070 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:44.329 
13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.329 nvme0n1 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.329 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.330 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.330 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.330 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.330 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.330 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.330 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.330 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:44.330 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.588 nvme0n1 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:44.588 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:44.589 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.589 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:44.589 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:44.589 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:44.589 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:44.589 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:44.589 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.589 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:44.589 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:44.589 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:44.589 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.589 13:50:57 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:44.589 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.589 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.848 nvme0n1 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:44.848 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.106 13:50:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.106 nvme0n1 00:32:45.106 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.106 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:32:45.106 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.106 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.106 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.106 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.106 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.106 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.106 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.106 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.365 nvme0n1 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.365 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.624 nvme0n1 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.624 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.883 13:50:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.142 nvme0n1 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.142 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.708 nvme0n1 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.708 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.966 nvme0n1 00:32:46.966 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.966 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.966 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.966 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.966 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.966 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.966 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.966 13:50:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.966 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.966 13:50:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.966 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.534 nvme0n1 00:32:47.534 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.534 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.534 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.534 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.534 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.534 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.534 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.535 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.794 nvme0n1 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.794 13:51:00 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzA1NDdjMDEwNzFhYzhiYTUyMGFiMzA0ODEwZGRiZDW2Acg0: 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: ]] 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzBmNTFmYjkxNTgwN2QxZTQzYzM2OGNjODA1MDc1ZDZiNTZhYjMyZDE2YmI0MmZlNzEyZTIxZTdjMTZhMDM4Yb9R4aI=: 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.794 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.053 13:51:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.621 nvme0n1 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.621 13:51:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.188 nvme0n1 00:32:49.188 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.188 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.188 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.188 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.188 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.188 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.188 13:51:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.188 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.188 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.188 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.188 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.188 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgxMDQzYTkwYWVmYjYzY2Y0NTU1NTRhZjJhMDYyOTRk4ay/: 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: ]] 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Yjk1ZjdhNGZiNGM2NDYxMjk2MDBmYmQzYThkZjYyZDDyLAwi: 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.189 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.126 nvme0n1 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzU5NjZkYTNiYjRiNzI3Zjc2MTg0MzQ3ZjM0NjUxNTk0OWEyYjcyMjNhYzNmMDdmb+n4nQ==: 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: ]] 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OWI4MTBkOWQyZjVhMGE4OWZkZWYxNmU2MTQ5ZTI0ODhs6VfI: 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:50.126 13:51:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.126 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.127 13:51:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.696 nvme0n1 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTU2N2U0Njc5NDEyYTQyMzM4M2Y1ODE3NzQwYjExYmQ0OWEwNjc5MzhmZDk3MTVhZWZmOWFjNDZmNTk3NTQ1MMZH3ZY=: 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:50.696 13:51:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.263 nvme0n1 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:51.263 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODM0MWYxMWJlMDMzMTlhM2FmNmU5MzlkNjI4ZGJmYmY1Mzg3ZWVhOGViYzEyZTAxL15KPQ==: 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: ]] 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MTMyZGQyMzc0ZmFhOThmMmVkZWQ1ZTNmYjExNzc2MTk3NTRkOTNiMWFlM2ZlZjZlCWeyNQ==: 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.264 
13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.264 2024/05/15 13:51:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:32:51.264 request: 00:32:51.264 { 00:32:51.264 "method": "bdev_nvme_attach_controller", 00:32:51.264 "params": { 00:32:51.264 "name": "nvme0", 00:32:51.264 "trtype": "tcp", 00:32:51.264 "traddr": "10.0.0.1", 00:32:51.264 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:51.264 "adrfam": "ipv4", 00:32:51.264 "trsvcid": "4420", 00:32:51.264 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:32:51.264 } 00:32:51.264 } 00:32:51.264 Got JSON-RPC error response 00:32:51.264 GoRPCClient: error on JSON-RPC call 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
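For readers following the trace, every digest/dhgroup/key combination above repeats the same attach pattern, and the rejected call just above is the first deliberately unauthenticated attempt. A minimal sketch of the authenticated path, assuming rpc_cmd is the suite's JSON-RPC wrapper and reusing the address, NQNs, and key names seen in this run (key3/ckey3 stand for keys registered earlier in the test, not defaults):

  # Restrict the initiator to a single digest/DH-group pair for this round.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # Attach with the host key; adding --dhchap-ctrlr-key requests bidirectional authentication.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
  # Confirm the controller appeared, then detach before the next combination.
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expected: nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0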
00:32:51.264 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.523 2024/05/15 13:51:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:32:51.523 request: 00:32:51.523 { 00:32:51.523 "method": "bdev_nvme_attach_controller", 00:32:51.523 "params": { 00:32:51.523 "name": "nvme0", 00:32:51.523 "trtype": "tcp", 00:32:51.523 "traddr": "10.0.0.1", 00:32:51.523 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:51.523 "adrfam": "ipv4", 00:32:51.523 "trsvcid": "4420", 00:32:51.523 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:51.523 "dhchap_key": "key2" 00:32:51.523 } 00:32:51.523 } 00:32:51.523 
Got JSON-RPC error response 00:32:51.523 GoRPCClient: error on JSON-RPC call 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.523 2024/05/15 13:51:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey2 dhchap_key:key1 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:32:51.523 request: 00:32:51.523 { 00:32:51.523 "method": "bdev_nvme_attach_controller", 00:32:51.523 "params": { 00:32:51.523 "name": "nvme0", 00:32:51.523 "trtype": "tcp", 00:32:51.523 "traddr": "10.0.0.1", 00:32:51.523 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:51.523 "adrfam": "ipv4", 00:32:51.523 "trsvcid": "4420", 00:32:51.523 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:51.523 "dhchap_key": "key1", 00:32:51.523 "dhchap_ctrlr_key": "ckey2" 00:32:51.523 } 00:32:51.523 } 00:32:51.523 Got JSON-RPC error response 00:32:51.523 GoRPCClient: error on JSON-RPC call 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:51.523 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:52.091 rmmod nvme_tcp 00:32:52.091 rmmod nvme_fabrics 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 110725 ']' 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 110725 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 110725 ']' 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 110725 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 110725 00:32:52.091 killing process with pid 110725 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:52.091 13:51:04 
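The three rejected attach attempts in this stretch (no DH-CHAP key at all, key2 alone, and key1 paired with the mismatched ckey2) are expected failures: each call is wrapped in the suite's NOT helper, so the JSON-RPC -32602 "Invalid parameters" responses are exactly what the test asserts before it moves on to cleanup. A simplified stand-in for that inversion pattern is sketched below; the real helper, as the es bookkeeping and the "es > 128" check in the trace show, also distinguishes crashes from ordinary failures:

  # Simplified sketch: NOT succeeds only when the wrapped command fails.
  NOT() {
      if "$@"; then
          return 1    # wrapped command unexpectedly succeeded
      fi
      return 0        # wrapped command failed, which is the expected outcome here
  }
  # Example mirroring this run: attaching with the wrong key must be rejected by the target.
  NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2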
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 110725' 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 110725 00:32:52.091 13:51:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 110725 00:32:52.091 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:52.091 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:52.091 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:52.091 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:52.091 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:52.091 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:52.091 13:51:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:52.091 13:51:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:52.350 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:32:52.350 13:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:52.350 13:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:52.350 13:51:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:52.350 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:52.350 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:52.350 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:52.350 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:52.350 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:52.350 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:52.350 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:52.350 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:52.350 13:51:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:52.917 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:53.175 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:53.175 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:53.175 13:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Ynl /tmp/spdk.key-null.9Ke /tmp/spdk.key-sha256.16B /tmp/spdk.key-sha384.wkD /tmp/spdk.key-sha512.yJ8 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:32:53.175 13:51:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:53.433 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:32:53.433 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:53.433 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:32:53.433 00:32:53.433 real 0m36.082s 00:32:53.433 user 0m32.262s 00:32:53.433 sys 0m3.775s 00:32:53.433 13:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:53.433 13:51:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.433 ************************************ 00:32:53.433 END TEST nvmf_auth_host 00:32:53.433 ************************************ 00:32:53.692 13:51:06 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:32:53.692 13:51:06 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:53.692 13:51:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:32:53.692 13:51:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:53.692 13:51:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:53.692 ************************************ 00:32:53.692 START TEST nvmf_digest 00:32:53.692 ************************************ 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:53.692 * Looking for test storage... 00:32:53.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:53.692 13:51:06 
nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:53.692 13:51:06 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:53.693 Cannot find device "nvmf_tgt_br" 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:53.693 Cannot find device "nvmf_tgt_br2" 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:53.693 Cannot find device "nvmf_tgt_br" 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:53.693 Cannot find device "nvmf_tgt_br2" 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:32:53.693 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:53.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:53.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set 
nvmf_tgt_br master nvmf_br 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:53.952 13:51:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:53.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:53.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:32:53.952 00:32:53.952 --- 10.0.0.2 ping statistics --- 00:32:53.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.952 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:53.952 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:53.952 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:32:53.952 00:32:53.952 --- 10.0.0.3 ping statistics --- 00:32:53.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.952 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:53.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:53.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:32:53.952 00:32:53.952 --- 10.0.0.1 ping statistics --- 00:32:53.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:53.952 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:53.952 ************************************ 00:32:53.952 START TEST nvmf_digest_clean 00:32:53.952 ************************************ 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == 
\d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:53.952 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:54.211 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=112341 00:32:54.211 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 112341 00:32:54.211 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 112341 ']' 00:32:54.211 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.211 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:54.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:54.211 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.211 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:54.212 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:54.212 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:54.212 [2024-05-15 13:51:07.111321] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:32:54.212 [2024-05-15 13:51:07.111443] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:54.212 [2024-05-15 13:51:07.235821] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:54.212 [2024-05-15 13:51:07.256633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.471 [2024-05-15 13:51:07.345257] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:54.471 [2024-05-15 13:51:07.345331] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:54.471 [2024-05-15 13:51:07.345345] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:54.471 [2024-05-15 13:51:07.345367] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:54.471 [2024-05-15 13:51:07.345377] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
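The nvmf_veth_init steps traced above wire up the test network before the target prints its startup banner: a network namespace for the target side, veth pairs whose bridge ends are enslaved to nvmf_br, the addresses 10.0.0.1-10.0.0.3, and an iptables rule admitting NVMe/TCP traffic on port 4420. Condensed into one place (a sketch of the traced commands, not the exact nvmf/common.sh code; requires root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings whose statistics appear above are the reachability checks for that topology (host to 10.0.0.2 and 10.0.0.3, and 10.0.0.1 from inside the namespace); the target itself is then launched inside the namespace with ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc, as shown in the trace.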
00:32:54.471 [2024-05-15 13:51:07.345406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.471 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:54.471 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:32:54.471 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:54.471 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:54.471 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:54.471 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:54.471 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:54.471 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:54.471 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:54.471 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.471 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:54.471 null0 00:32:54.471 [2024-05-15 13:51:07.548679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:54.730 [2024-05-15 13:51:07.572609] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:54.730 [2024-05-15 13:51:07.572911] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:54.730 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112382 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112382 /var/tmp/bperf.sock 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 112382 ']' 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 
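For each clean-digest pass the harness starts a second SPDK application, bdevperf, on its own RPC socket (/var/tmp/bperf.sock) with --wait-for-rpc, so the digest options can be applied before any I/O is issued; waitforlisten then blocks until that socket answers. A rough equivalent, assuming it is run from the SPDK repo root (the polling loop stands in for waitforlisten and is illustrative only):

  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  bperfpid=$!
  # wait until the bperf RPC socket is up and answering
  until ./scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done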
00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:54.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:54.731 13:51:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:54.731 [2024-05-15 13:51:07.634830] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:32:54.731 [2024-05-15 13:51:07.634935] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112382 ] 00:32:54.731 [2024-05-15 13:51:07.758094] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:54.731 [2024-05-15 13:51:07.777372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.990 [2024-05-15 13:51:07.878468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.558 13:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:55.558 13:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:32:55.558 13:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:55.558 13:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:55.558 13:51:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:56.125 13:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:56.125 13:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:56.384 nvme0n1 00:32:56.384 13:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:56.384 13:51:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:56.384 Running I/O for 2 seconds... 
00:32:58.918 00:32:58.918 Latency(us) 00:32:58.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.918 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:58.918 nvme0n1 : 2.00 19272.21 75.28 0.00 0.00 6632.95 3693.85 12034.79 00:32:58.918 =================================================================================================================== 00:32:58.918 Total : 19272.21 75.28 0.00 0.00 6632.95 3693.85 12034.79 00:32:58.918 0 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:58.918 | select(.opcode=="crc32c") 00:32:58.918 | "\(.module_name) \(.executed)"' 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112382 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 112382 ']' 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 112382 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112382 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:58.918 killing process with pid 112382 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112382' 00:32:58.918 Received shutdown signal, test time was about 2.000000 seconds 00:32:58.918 00:32:58.918 Latency(us) 00:32:58.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.918 =================================================================================================================== 00:32:58.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 112382 00:32:58.918 13:51:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 112382 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112469 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112469 /var/tmp/bperf.sock 00:32:58.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 112469 ']' 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:58.919 13:51:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:59.177 [2024-05-15 13:51:12.050110] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:32:59.177 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:59.178 Zero copy mechanism will not be used. 00:32:59.178 [2024-05-15 13:51:12.050956] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112469 ] 00:32:59.178 [2024-05-15 13:51:12.170494] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
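After each 2-second run the harness verifies which accel module actually executed the crc32c work: it reads accel_get_stats over the bperf socket and reduces the JSON with the jq filter shown in the trace; with scan_dsa=false the expected module is software. A sketch of that check, using the same RPC and filter as the trace (the pass condition mirrors the acc_executed/exp_module comparison above):

  stats=$(./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  read -r acc_module acc_executed <<< "$stats"
  # pass when some crc32c operations executed and they ran in the expected module
  (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "crc32c handled by $acc_module"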
00:32:59.178 [2024-05-15 13:51:12.182071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.435 [2024-05-15 13:51:12.282757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:00.002 13:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:00.002 13:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:00.002 13:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:00.002 13:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:00.002 13:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:00.567 13:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:00.567 13:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:00.567 nvme0n1 00:33:00.824 13:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:00.824 13:51:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:00.824 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:00.824 Zero copy mechanism will not be used. 00:33:00.824 Running I/O for 2 seconds... 00:33:02.749 00:33:02.749 Latency(us) 00:33:02.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.749 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:02.749 nvme0n1 : 2.00 8024.98 1003.12 0.00 0.00 1989.73 614.40 3842.79 00:33:02.749 =================================================================================================================== 00:33:02.749 Total : 8024.98 1003.12 0.00 0.00 1989.73 614.40 3842.79 00:33:02.749 0 00:33:02.749 13:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:02.749 13:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:02.749 13:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:02.749 | select(.opcode=="crc32c") 00:33:02.749 | "\(.module_name) \(.executed)"' 00:33:02.749 13:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:02.749 13:51:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112469 00:33:03.316 13:51:16 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 112469 ']' 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 112469 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112469 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112469' 00:33:03.316 killing process with pid 112469 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 112469 00:33:03.316 Received shutdown signal, test time was about 2.000000 seconds 00:33:03.316 00:33:03.316 Latency(us) 00:33:03.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.316 =================================================================================================================== 00:33:03.316 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 112469 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112560 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112560 /var/tmp/bperf.sock 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 112560 ']' 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:03.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
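Before the I/O starts, each pass configures bdevperf through the same short RPC sequence: finish framework init, attach an NVMe-oF controller to the listener at 10.0.0.2:4420 with TCP data digest enabled (--ddgst), then drive the workload through bdevperf.py. Condensed from the traced commands (repo-relative paths assumed):

  rpc="./scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc framework_start_init
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The resulting bdev shows up as nvme0n1, which is the device name reported in the latency tables.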
00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:03.316 13:51:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:03.574 [2024-05-15 13:51:16.447035] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:33:03.574 [2024-05-15 13:51:16.447141] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112560 ] 00:33:03.574 [2024-05-15 13:51:16.566466] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:03.574 [2024-05-15 13:51:16.578269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.832 [2024-05-15 13:51:16.676958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.400 13:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:04.400 13:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:04.400 13:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:04.400 13:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:04.400 13:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:04.968 13:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:04.968 13:51:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.227 nvme0n1 00:33:05.227 13:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:05.227 13:51:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:05.227 Running I/O for 2 seconds... 
00:33:07.760 00:33:07.760 Latency(us) 00:33:07.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.760 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:07.760 nvme0n1 : 2.00 22833.41 89.19 0.00 0.00 5599.63 2189.50 13583.83 00:33:07.760 =================================================================================================================== 00:33:07.760 Total : 22833.41 89.19 0.00 0.00 5599.63 2189.50 13583.83 00:33:07.760 0 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:07.760 | select(.opcode=="crc32c") 00:33:07.760 | "\(.module_name) \(.executed)"' 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112560 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 112560 ']' 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 112560 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112560 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112560' 00:33:07.760 killing process with pid 112560 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 112560 00:33:07.760 Received shutdown signal, test time was about 2.000000 seconds 00:33:07.760 00:33:07.760 Latency(us) 00:33:07.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.760 =================================================================================================================== 00:33:07.760 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 112560 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=112651 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 112651 /var/tmp/bperf.sock 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 112651 ']' 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:07.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:07.760 13:51:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:07.760 [2024-05-15 13:51:20.852920] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:33:07.760 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:07.760 Zero copy mechanism will not be used. 00:33:07.761 [2024-05-15 13:51:20.853693] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112651 ] 00:33:08.020 [2024-05-15 13:51:20.978513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
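At this point all four clean-digest passes have been launched; they differ only in workload, block size and queue depth, with DSA scanning disabled throughout. Collected in one place, the traced invocations of the harness's run_bperf helper (defined in host/digest.sh, so these lines only make sense inside that script) are:

  run_bperf randread  4096   128 false   # rw  bs      qd  scan_dsa
  run_bperf randread  131072 16  false
  run_bperf randwrite 4096   128 false
  run_bperf randwrite 131072 16  false

The two 131072-byte runs are the ones that print "I/O size of 131072 is greater than zero copy threshold (65536)", since they exceed bdevperf's zero-copy cutoff; the 4096-byte runs do not.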
00:33:08.020 [2024-05-15 13:51:20.990167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.020 [2024-05-15 13:51:21.087072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:08.956 13:51:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:08.956 13:51:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:33:08.956 13:51:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:08.956 13:51:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:08.957 13:51:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:09.215 13:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:09.215 13:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:09.473 nvme0n1 00:33:09.473 13:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:09.473 13:51:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:09.732 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:09.732 Zero copy mechanism will not be used. 00:33:09.732 Running I/O for 2 seconds... 00:33:11.646 00:33:11.646 Latency(us) 00:33:11.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.646 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:11.646 nvme0n1 : 2.00 6858.74 857.34 0.00 0.00 2326.74 1854.37 8519.68 00:33:11.646 =================================================================================================================== 00:33:11.646 Total : 6858.74 857.34 0.00 0.00 2326.74 1854.37 8519.68 00:33:11.646 0 00:33:11.646 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:11.646 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:11.646 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:11.646 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:11.646 | select(.opcode=="crc32c") 00:33:11.646 | "\(.module_name) \(.executed)"' 00:33:11.646 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:11.906 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:11.906 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:11.906 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:11.906 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:11.906 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 112651 00:33:11.906 13:51:24 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 112651 ']' 00:33:11.906 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 112651 00:33:11.906 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:11.906 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:11.906 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112651 00:33:11.906 killing process with pid 112651 00:33:11.906 Received shutdown signal, test time was about 2.000000 seconds 00:33:11.906 00:33:11.906 Latency(us) 00:33:11.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:11.906 =================================================================================================================== 00:33:11.906 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:11.906 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:11.906 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:11.906 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112651' 00:33:11.906 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 112651 00:33:11.906 13:51:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 112651 00:33:12.167 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 112341 00:33:12.167 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 112341 ']' 00:33:12.167 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 112341 00:33:12.167 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:33:12.167 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:12.167 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112341 00:33:12.167 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:12.167 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:12.167 killing process with pid 112341 00:33:12.167 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112341' 00:33:12.167 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 112341 00:33:12.167 [2024-05-15 13:51:25.222961] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:12.167 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 112341 00:33:12.426 00:33:12.426 real 0m18.394s 00:33:12.426 user 0m35.876s 00:33:12.426 sys 0m4.689s 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:12.426 ************************************ 00:33:12.426 END TEST nvmf_digest_clean 00:33:12.426 ************************************ 00:33:12.426 13:51:25 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:12.426 ************************************ 00:33:12.426 START TEST nvmf_digest_error 00:33:12.426 ************************************ 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=112764 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 112764 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 112764 ']' 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:12.426 13:51:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:12.686 [2024-05-15 13:51:25.545430] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:33:12.686 [2024-05-15 13:51:25.545531] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.686 [2024-05-15 13:51:25.664672] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:12.686 [2024-05-15 13:51:25.684905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.960 [2024-05-15 13:51:25.786711] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.960 [2024-05-15 13:51:25.786769] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:12.960 [2024-05-15 13:51:25.786783] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.960 [2024-05-15 13:51:25.786794] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.960 [2024-05-15 13:51:25.786803] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:12.960 [2024-05-15 13:51:25.786832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.528 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:13.528 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:13.528 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:13.528 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:13.528 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:13.528 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:13.528 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:13.528 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.528 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:13.528 [2024-05-15 13:51:26.543421] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:13.528 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.528 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:13.528 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:13.528 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.528 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:13.786 null0 00:33:13.786 [2024-05-15 13:51:26.660622] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:13.786 [2024-05-15 13:51:26.684553] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:13.786 [2024-05-15 13:51:26.684842] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112809 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- 
# waitforlisten 112809 /var/tmp/bperf.sock 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 112809 ']' 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:13.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:13.786 13:51:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:13.786 [2024-05-15 13:51:26.740348] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:33:13.786 [2024-05-15 13:51:26.740438] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112809 ] 00:33:13.786 [2024-05-15 13:51:26.858899] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:13.786 [2024-05-15 13:51:26.874950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.044 [2024-05-15 13:51:26.969269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.978 13:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:14.978 13:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:14.978 13:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:14.978 13:51:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:14.978 13:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:14.978 13:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.978 13:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:14.978 13:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.978 13:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:14.978 13:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:15.237 nvme0n1 00:33:15.237 13:51:28 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:15.237 13:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.237 13:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:15.237 13:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.237 13:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:15.237 13:51:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:15.496 Running I/O for 2 seconds... 00:33:15.496 [2024-05-15 13:51:28.468260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.496 [2024-05-15 13:51:28.468354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.496 [2024-05-15 13:51:28.468370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.496 [2024-05-15 13:51:28.482260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.496 [2024-05-15 13:51:28.482315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.496 [2024-05-15 13:51:28.482344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.496 [2024-05-15 13:51:28.495858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.496 [2024-05-15 13:51:28.495913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.496 [2024-05-15 13:51:28.495942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.496 [2024-05-15 13:51:28.508379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.496 [2024-05-15 13:51:28.508420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:6413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.496 [2024-05-15 13:51:28.508435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.496 [2024-05-15 13:51:28.523247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.496 [2024-05-15 13:51:28.523307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.496 [2024-05-15 13:51:28.523337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.496 [2024-05-15 13:51:28.536298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.496 [2024-05-15 
13:51:28.536374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.496 [2024-05-15 13:51:28.536389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.496 [2024-05-15 13:51:28.550957] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.496 [2024-05-15 13:51:28.551001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.496 [2024-05-15 13:51:28.551017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.496 [2024-05-15 13:51:28.562709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.496 [2024-05-15 13:51:28.562769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.496 [2024-05-15 13:51:28.562784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.496 [2024-05-15 13:51:28.577470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.496 [2024-05-15 13:51:28.577533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.496 [2024-05-15 13:51:28.577548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.496 [2024-05-15 13:51:28.591814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.496 [2024-05-15 13:51:28.591870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.496 [2024-05-15 13:51:28.591884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.755 [2024-05-15 13:51:28.605482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.755 [2024-05-15 13:51:28.605522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.755 [2024-05-15 13:51:28.605536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.755 [2024-05-15 13:51:28.619113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.755 [2024-05-15 13:51:28.619171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.755 [2024-05-15 13:51:28.619200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.755 [2024-05-15 13:51:28.632506] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x136b690) 00:33:15.755 [2024-05-15 13:51:28.632571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.755 [2024-05-15 13:51:28.632585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.755 [2024-05-15 13:51:28.645633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.755 [2024-05-15 13:51:28.645704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.755 [2024-05-15 13:51:28.645719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.755 [2024-05-15 13:51:28.660539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.755 [2024-05-15 13:51:28.660623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.755 [2024-05-15 13:51:28.660640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.755 [2024-05-15 13:51:28.674323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.755 [2024-05-15 13:51:28.674362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.755 [2024-05-15 13:51:28.674391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.755 [2024-05-15 13:51:28.687034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.755 [2024-05-15 13:51:28.687087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.756 [2024-05-15 13:51:28.687117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.756 [2024-05-15 13:51:28.701134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.756 [2024-05-15 13:51:28.701188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.756 [2024-05-15 13:51:28.701218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.756 [2024-05-15 13:51:28.713275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.756 [2024-05-15 13:51:28.713315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.756 [2024-05-15 13:51:28.713329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.756 [2024-05-15 13:51:28.729227] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.756 [2024-05-15 13:51:28.729283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.756 [2024-05-15 13:51:28.729312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.756 [2024-05-15 13:51:28.743261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.756 [2024-05-15 13:51:28.743317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.756 [2024-05-15 13:51:28.743347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.756 [2024-05-15 13:51:28.757798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.756 [2024-05-15 13:51:28.757876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.756 [2024-05-15 13:51:28.757890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.756 [2024-05-15 13:51:28.769649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.756 [2024-05-15 13:51:28.769737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.756 [2024-05-15 13:51:28.769769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.756 [2024-05-15 13:51:28.784907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.756 [2024-05-15 13:51:28.784979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.756 [2024-05-15 13:51:28.785018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.756 [2024-05-15 13:51:28.798909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.756 [2024-05-15 13:51:28.799019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.756 [2024-05-15 13:51:28.799055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.756 [2024-05-15 13:51:28.811332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.756 [2024-05-15 13:51:28.811389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.756 [2024-05-15 13:51:28.811403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:33:15.756 [2024-05-15 13:51:28.825176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.756 [2024-05-15 13:51:28.825236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.756 [2024-05-15 13:51:28.825266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.756 [2024-05-15 13:51:28.837312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.756 [2024-05-15 13:51:28.837373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.756 [2024-05-15 13:51:28.837402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:15.756 [2024-05-15 13:51:28.852078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:15.756 [2024-05-15 13:51:28.852138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:15.756 [2024-05-15 13:51:28.852151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:28.866477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:28.866534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:28.866563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:28.880242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:28.880298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:28.880334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:28.891212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:28.891263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:28.891292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:28.904197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:28.904254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:28.904283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:28.916607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:28.916691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:5336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:28.916720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:28.929395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:28.929448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:28.929477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:28.942414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:28.942457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:28.942471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:28.957666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:28.957748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:28.957779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:28.972036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:28.972090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:28.972119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:28.985860] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:28.985915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:28.985945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:28.999546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:28.999598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:28.999640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:29.011316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:29.011372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:29.011401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:29.024444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:29.024505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:29.024520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:29.037414] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:29.037502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:29.037539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:29.051637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:29.051712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:29.051742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.015 [2024-05-15 13:51:29.065342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.015 [2024-05-15 13:51:29.065394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.015 [2024-05-15 13:51:29.065423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.016 [2024-05-15 13:51:29.077411] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.016 [2024-05-15 13:51:29.077464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.016 [2024-05-15 13:51:29.077493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.016 [2024-05-15 13:51:29.090495] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.016 [2024-05-15 13:51:29.090547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:16.016 [2024-05-15 13:51:29.090577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.016 [2024-05-15 13:51:29.104160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.016 [2024-05-15 13:51:29.104211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.016 [2024-05-15 13:51:29.104240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.116203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.116256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.116286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.129531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.129584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.129613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.143318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.143391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.143425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.156813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.156881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.156910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.169292] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.169344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.169373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.180698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.180752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:19254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.180781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.194501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.194558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.194588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.208525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.208580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.208610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.222218] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.222289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:10760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.222320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.233823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.233897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.233927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.249562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.249624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.249639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.263089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.263133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.263147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.277274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.277338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.277368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.291385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.291444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.291474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.303094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.303152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.303182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.317128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.317187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.317217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.331140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.331213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.331242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.343927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.343983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.343997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.275 [2024-05-15 13:51:29.359814] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.275 [2024-05-15 13:51:29.359871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.275 [2024-05-15 13:51:29.359886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.534 [2024-05-15 13:51:29.374351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 
00:33:16.534 [2024-05-15 13:51:29.374458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.534 [2024-05-15 13:51:29.374474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.534 [2024-05-15 13:51:29.384869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.534 [2024-05-15 13:51:29.384926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.534 [2024-05-15 13:51:29.384958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.534 [2024-05-15 13:51:29.400423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.534 [2024-05-15 13:51:29.400469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.534 [2024-05-15 13:51:29.400483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.534 [2024-05-15 13:51:29.414012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.534 [2024-05-15 13:51:29.414069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.534 [2024-05-15 13:51:29.414099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.534 [2024-05-15 13:51:29.427822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.534 [2024-05-15 13:51:29.427878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.534 [2024-05-15 13:51:29.427908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.534 [2024-05-15 13:51:29.441020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.534 [2024-05-15 13:51:29.441073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.534 [2024-05-15 13:51:29.441102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.534 [2024-05-15 13:51:29.452490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.534 [2024-05-15 13:51:29.452529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.534 [2024-05-15 13:51:29.452543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.534 [2024-05-15 13:51:29.467086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.534 [2024-05-15 13:51:29.467144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.535 [2024-05-15 13:51:29.467174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.535 [2024-05-15 13:51:29.481333] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.535 [2024-05-15 13:51:29.481374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.535 [2024-05-15 13:51:29.481388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.535 [2024-05-15 13:51:29.495369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.535 [2024-05-15 13:51:29.495424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.535 [2024-05-15 13:51:29.495439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.535 [2024-05-15 13:51:29.509848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.535 [2024-05-15 13:51:29.509898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.535 [2024-05-15 13:51:29.509912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.535 [2024-05-15 13:51:29.523449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.535 [2024-05-15 13:51:29.523493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.535 [2024-05-15 13:51:29.523509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.535 [2024-05-15 13:51:29.536465] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.535 [2024-05-15 13:51:29.536507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.535 [2024-05-15 13:51:29.536521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.535 [2024-05-15 13:51:29.551363] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.535 [2024-05-15 13:51:29.551430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.535 [2024-05-15 13:51:29.551464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.535 [2024-05-15 13:51:29.568402] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.535 [2024-05-15 13:51:29.568469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.535 [2024-05-15 13:51:29.568494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.535 [2024-05-15 13:51:29.582913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.535 [2024-05-15 13:51:29.582986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:14984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.535 [2024-05-15 13:51:29.583011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.535 [2024-05-15 13:51:29.599610] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.535 [2024-05-15 13:51:29.599706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.535 [2024-05-15 13:51:29.599734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.535 [2024-05-15 13:51:29.616108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.535 [2024-05-15 13:51:29.616186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.535 [2024-05-15 13:51:29.616216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.794 [2024-05-15 13:51:29.632718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.794 [2024-05-15 13:51:29.632780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.794 [2024-05-15 13:51:29.632802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.794 [2024-05-15 13:51:29.649065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.794 [2024-05-15 13:51:29.649150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.794 [2024-05-15 13:51:29.649191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.794 [2024-05-15 13:51:29.664821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.794 [2024-05-15 13:51:29.664915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.794 [2024-05-15 13:51:29.664959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:33:16.794 [2024-05-15 13:51:29.686030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.794 [2024-05-15 13:51:29.686116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.794 [2024-05-15 13:51:29.686140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.794 [2024-05-15 13:51:29.701150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.794 [2024-05-15 13:51:29.701215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.794 [2024-05-15 13:51:29.701233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.794 [2024-05-15 13:51:29.716044] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.794 [2024-05-15 13:51:29.716106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.794 [2024-05-15 13:51:29.716128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.794 [2024-05-15 13:51:29.731773] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.794 [2024-05-15 13:51:29.731869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.794 [2024-05-15 13:51:29.731889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.794 [2024-05-15 13:51:29.748965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.794 [2024-05-15 13:51:29.749057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.794 [2024-05-15 13:51:29.749084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.794 [2024-05-15 13:51:29.766203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.794 [2024-05-15 13:51:29.766265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.794 [2024-05-15 13:51:29.766296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.794 [2024-05-15 13:51:29.784487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.794 [2024-05-15 13:51:29.784541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.794 [2024-05-15 13:51:29.784559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.794 [2024-05-15 13:51:29.799949] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.794 [2024-05-15 13:51:29.800010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.794 [2024-05-15 13:51:29.800039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.794 [2024-05-15 13:51:29.815082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.794 [2024-05-15 13:51:29.815161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.794 [2024-05-15 13:51:29.815190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.794 [2024-05-15 13:51:29.829652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.794 [2024-05-15 13:51:29.829710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.794 [2024-05-15 13:51:29.829728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.794 [2024-05-15 13:51:29.845620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.795 [2024-05-15 13:51:29.845683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.795 [2024-05-15 13:51:29.845711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.795 [2024-05-15 13:51:29.860195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.795 [2024-05-15 13:51:29.860255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.795 [2024-05-15 13:51:29.860280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:16.795 [2024-05-15 13:51:29.876632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:16.795 [2024-05-15 13:51:29.876692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:16.795 [2024-05-15 13:51:29.876709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:29.893387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:29.893453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:29.893483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:29.906968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:29.907018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:29.907035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:29.920609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:29.920689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:29.920720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:29.934711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:29.934767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:29.934797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:29.948876] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:29.948932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:29.948962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:29.962909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:29.962977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:29.962991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:29.975776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:29.975832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:29.975861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:29.990837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:29.990893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:17.054 [2024-05-15 13:51:29.990923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:30.004210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:30.004267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:30.004297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:30.019504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:30.019561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:30.019575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:30.030712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:30.030767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:30.030796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:30.045093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:30.045150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:30.045179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:30.059555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:30.059637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:30.059652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:30.070861] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:30.070917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:30.070947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:30.085590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:30.085655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 
lba:18769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:30.085686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:30.099920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:30.099982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:30.100013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:30.112900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:30.112984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:30.113015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:30.126161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:30.126218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:30.126251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.054 [2024-05-15 13:51:30.139652] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.054 [2024-05-15 13:51:30.139691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.054 [2024-05-15 13:51:30.139705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.314 [2024-05-15 13:51:30.154229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.314 [2024-05-15 13:51:30.154287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.314 [2024-05-15 13:51:30.154318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.314 [2024-05-15 13:51:30.166476] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.314 [2024-05-15 13:51:30.166534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.314 [2024-05-15 13:51:30.166564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.314 [2024-05-15 13:51:30.182177] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.314 [2024-05-15 13:51:30.182246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.314 [2024-05-15 13:51:30.182261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.314 [2024-05-15 13:51:30.194042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.194104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.194135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.208978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.209055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.209085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.221715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.221773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.221803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.234701] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.234759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.234789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.246023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.246078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.246108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.259587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.259655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.259685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.274403] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 
00:33:17.315 [2024-05-15 13:51:30.274458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.274488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.285469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.285523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.285553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.299434] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.299506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.299520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.314547] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.314589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.314615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.328272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.328313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.328335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.342442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.342509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.342540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.355048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.355104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.355134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.368879] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.368933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.368962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.381658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.381724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.381754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.394440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.394497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.394526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.315 [2024-05-15 13:51:30.406175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.315 [2024-05-15 13:51:30.406229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.315 [2024-05-15 13:51:30.406258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.584 [2024-05-15 13:51:30.420525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.584 [2024-05-15 13:51:30.420565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.584 [2024-05-15 13:51:30.420578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.584 [2024-05-15 13:51:30.434532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.584 [2024-05-15 13:51:30.434588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.584 [2024-05-15 13:51:30.434628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.584 [2024-05-15 13:51:30.486180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x136b690) 00:33:17.584 [2024-05-15 13:51:30.486247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:17.584 [2024-05-15 13:51:30.486277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:17.584 00:33:17.584 Latency(us) 00:33:17.584 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:17.584 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:17.584 nvme0n1 : 2.04 17865.98 69.79 0.00 0.00 7066.08 3678.95 46947.61 00:33:17.584 =================================================================================================================== 00:33:17.584 Total : 17865.98 69.79 0.00 0.00 7066.08 3678.95 46947.61 00:33:17.584 0 00:33:17.584 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:17.584 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:17.584 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:17.584 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:17.584 | .driver_specific 00:33:17.584 | .nvme_error 00:33:17.584 | .status_code 00:33:17.584 | .command_transient_transport_error' 00:33:17.842 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:33:17.842 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112809 00:33:17.842 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 112809 ']' 00:33:17.842 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 112809 00:33:17.842 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:17.842 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:17.842 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112809 00:33:17.842 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:17.842 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:17.842 killing process with pid 112809 00:33:17.842 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112809' 00:33:17.842 Received shutdown signal, test time was about 2.000000 seconds 00:33:17.842 00:33:17.842 Latency(us) 00:33:17.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:17.842 =================================================================================================================== 00:33:17.842 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:17.842 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 112809 00:33:17.842 13:51:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 112809 00:33:18.101 13:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:18.101 13:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:18.101 13:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:18.101 13:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:18.101 13:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:18.101 13:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:18.101 13:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112898 00:33:18.101 13:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112898 /var/tmp/bperf.sock 00:33:18.101 13:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 112898 ']' 00:33:18.101 13:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:18.101 13:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:18.101 13:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:18.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:18.101 13:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:18.101 13:51:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:18.101 [2024-05-15 13:51:31.059167] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:33:18.101 [2024-05-15 13:51:31.059287] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112898 ] 00:33:18.101 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:18.101 Zero copy mechanism will not be used. 00:33:18.101 [2024-05-15 13:51:31.177467] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
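For reference, the transient-error check traced a few lines above (bperf_rpc bdev_get_iostat piped through jq, then the (( count > 0 )) test) condenses into a short bash sketch. The rpc.py path, bperf socket, bdev name, and jq filter are taken from the trace; the check_transient_errors wrapper name is only illustrative and not part of the captured run:

check_transient_errors() {
    # Query bdevperf's iostat over its RPC socket and pull out the count of
    # COMMAND TRANSIENT TRANSPORT ERROR completions recorded for the bdev.
    local bdev=$1 count
    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The digest test only passes if the injected data-digest corruption
    # actually produced transient transport errors (143 in the run above).
    (( count > 0 ))
}

check_transient_errors nvme0n1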
00:33:18.101 [2024-05-15 13:51:31.192722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:18.359 [2024-05-15 13:51:31.285040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.295 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:19.295 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:19.295 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:19.295 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:19.553 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:19.553 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.553 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:19.553 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.553 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:19.553 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:19.812 nvme0n1 00:33:19.812 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:19.812 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.812 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:19.812 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.812 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:19.812 13:51:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:19.812 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:19.812 Zero copy mechanism will not be used. 00:33:19.812 Running I/O for 2 seconds... 
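The setup traced above for the 131072-byte, queue-depth-16 error case boils down to a handful of RPC calls before perform_tests starts the 2-second random-read workload. A condensed bash sketch follows; the bperf socket, target address, and subsystem NQN come from the trace, while the socket used by rpc_cmd for the accel_error_inject_error calls is not visible in this excerpt, so the default-socket target below is an assumption:

BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
TARGET_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # assumption: rpc_cmd's default SPDK RPC socket

# Record NVMe error completions and retry failed I/O indefinitely in the bdev layer.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Keep CRC-32C error injection off while the controller attaches.
$TARGET_RPC accel_error_inject_error -o crc32c -t disable

# Attach over NVMe/TCP with data digest (--ddgst) enabled.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 32 CRC-32C operations so receive-side data digests fail, then run the workload;
# each failed digest surfaces below as a COMMAND TRANSIENT TRANSPORT ERROR completion.
$TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests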
00:33:19.812 [2024-05-15 13:51:32.880441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:19.812 [2024-05-15 13:51:32.880499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.812 [2024-05-15 13:51:32.880515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.812 [2024-05-15 13:51:32.885317] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:19.812 [2024-05-15 13:51:32.885362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.812 [2024-05-15 13:51:32.885376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.812 [2024-05-15 13:51:32.889480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:19.812 [2024-05-15 13:51:32.889522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.812 [2024-05-15 13:51:32.889536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.813 [2024-05-15 13:51:32.892692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:19.813 [2024-05-15 13:51:32.892733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.813 [2024-05-15 13:51:32.892747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:19.813 [2024-05-15 13:51:32.896878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:19.813 [2024-05-15 13:51:32.896917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.813 [2024-05-15 13:51:32.896947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:19.813 [2024-05-15 13:51:32.900497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:19.813 [2024-05-15 13:51:32.900538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.813 [2024-05-15 13:51:32.900552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:19.813 [2024-05-15 13:51:32.904583] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:19.813 [2024-05-15 13:51:32.904633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.813 [2024-05-15 13:51:32.904647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:19.813 [2024-05-15 13:51:32.909310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:19.813 [2024-05-15 13:51:32.909353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:19.813 [2024-05-15 13:51:32.909367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.073 [2024-05-15 13:51:32.912523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.073 [2024-05-15 13:51:32.912563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.073 [2024-05-15 13:51:32.912576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.073 [2024-05-15 13:51:32.916252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.073 [2024-05-15 13:51:32.916290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.073 [2024-05-15 13:51:32.916328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.073 [2024-05-15 13:51:32.920225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.073 [2024-05-15 13:51:32.920263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.073 [2024-05-15 13:51:32.920292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.073 [2024-05-15 13:51:32.924945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.073 [2024-05-15 13:51:32.924983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.073 [2024-05-15 13:51:32.925013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.073 [2024-05-15 13:51:32.930191] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.073 [2024-05-15 13:51:32.930243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.073 [2024-05-15 13:51:32.930257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.073 [2024-05-15 13:51:32.933853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.073 [2024-05-15 13:51:32.933899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.073 [2024-05-15 13:51:32.933946] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.073 [2024-05-15 13:51:32.938133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.073 [2024-05-15 13:51:32.938172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.073 [2024-05-15 13:51:32.938202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.073 [2024-05-15 13:51:32.942168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.073 [2024-05-15 13:51:32.942212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.073 [2024-05-15 13:51:32.942226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.073 [2024-05-15 13:51:32.946851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.073 [2024-05-15 13:51:32.946892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.073 [2024-05-15 13:51:32.946922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.073 [2024-05-15 13:51:32.951377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.073 [2024-05-15 13:51:32.951419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.073 [2024-05-15 13:51:32.951449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.073 [2024-05-15 13:51:32.954228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.073 [2024-05-15 13:51:32.954266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.073 [2024-05-15 13:51:32.954295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.073 [2024-05-15 13:51:32.960033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.073 [2024-05-15 13:51:32.960077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:32.960091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:32.964306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:32.964356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:32.964370] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:32.967828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:32.967869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:32.967883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:32.972389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:32.972431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:32.972445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:32.976561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:32.976614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:32.976629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:32.981011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:32.981051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:32.981065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:32.985581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:32.985641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:32.985656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:32.990089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:32.990131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:32.990146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:32.993558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:32.993610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:20.074 [2024-05-15 13:51:32.993625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:32.998115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:32.998156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:32.998170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.002298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.002338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.002352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.006776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.006815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.006829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.011438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.011485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.011515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.014676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.014714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.014729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.019011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.019052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.019067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.022997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.023036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21280 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.023050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.026900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.026949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.026963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.031284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.031325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.031339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.034554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.034594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.034622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.039060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.039106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.039120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.043371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.043422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.043436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.048354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.048395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.048409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.051705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.051744] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.051757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.056069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.056108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.056137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.060908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.060967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.060996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.064195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.064249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.064279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.069050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.069106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.069150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.073658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.074 [2024-05-15 13:51:33.073722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.074 [2024-05-15 13:51:33.073752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.074 [2024-05-15 13:51:33.077602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.077654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.077669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.082213] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.082268] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.082298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.085543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.085627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.085643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.090097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.090155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.090168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.095376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.095433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.095447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.098419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.098475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.098488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.102800] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.102840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.102853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.107128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.107182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.107211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.111636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 
00:33:20.075 [2024-05-15 13:51:33.111688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.111716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.115777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.115830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.115860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.120063] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.120114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.120144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.124150] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.124202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.124237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.128260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.128312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.128366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.132428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.132467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.132488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.136230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.136284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.136313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.140176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.140228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.140257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.144752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.144792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.144805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.148852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.148920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.148949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.152832] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.152898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.152926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.156955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.157009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.157038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.161259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.161312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.161340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.075 [2024-05-15 13:51:33.165613] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.075 [2024-05-15 13:51:33.165679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.075 [2024-05-15 13:51:33.165714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.336 [2024-05-15 13:51:33.170038] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.336 [2024-05-15 13:51:33.170093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.336 [2024-05-15 13:51:33.170106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.336 [2024-05-15 13:51:33.174345] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.336 [2024-05-15 13:51:33.174415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.336 [2024-05-15 13:51:33.174444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.336 [2024-05-15 13:51:33.178215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.336 [2024-05-15 13:51:33.178266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.336 [2024-05-15 13:51:33.178295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.336 [2024-05-15 13:51:33.182065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.336 [2024-05-15 13:51:33.182120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.336 [2024-05-15 13:51:33.182149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.336 [2024-05-15 13:51:33.186378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.186434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.186448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.191342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.191398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.191426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.195953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.196007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.196036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:20.337 [2024-05-15 13:51:33.199041] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.199094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.199123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.203973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.204043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.204072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.207680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.207734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.207762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.211727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.211780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.211809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.215260] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.215318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.215347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.219667] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.219750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.219764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.224364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.224405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.224418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.227657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.227709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.227738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.231539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.231591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.231646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.235712] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.235766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.235795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.239831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.239887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.239900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.244166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.244220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.244249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.247939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.247991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.248034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.252281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.252360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.252375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.256021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.256089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.256101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.259485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.259539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.259550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.262593] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.262653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.262681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.266797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.266837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.266866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.271048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.271101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.271129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.275118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.275159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.275173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.278868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.278908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.278921] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.283248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.283303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.283332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.287131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.287193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.287222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.292591] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.292648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.292668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.337 [2024-05-15 13:51:33.298365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.337 [2024-05-15 13:51:33.298421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.337 [2024-05-15 13:51:33.298467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.301831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.301887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.301916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.306276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.306341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.306370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.310050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.310091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:20.338 [2024-05-15 13:51:33.310105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.314913] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.314954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.314983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.319374] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.319422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.319451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.323106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.323154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.323183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.327554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.327638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.327658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.331921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.331974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.332002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.335082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.335136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.335164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.339404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.339457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.339486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.344056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.344112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.344124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.348587] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.348641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.348655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.353389] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.353443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.353472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.356043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.356094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.356122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.360284] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.360365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.360379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.365158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.365211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.365240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.368459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.368499] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.368512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.373033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.373089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.373118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.378074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.378128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.378158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.381888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.381957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.381986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.385236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.385290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.385319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.389953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.390008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.390036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.393214] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.393266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.393295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.397666] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.397749] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.397778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.401892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.401962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.401992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.405256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.405310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.405338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.409302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.409354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.338 [2024-05-15 13:51:33.409382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.338 [2024-05-15 13:51:33.413574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.338 [2024-05-15 13:51:33.413641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.339 [2024-05-15 13:51:33.413671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.339 [2024-05-15 13:51:33.416715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.339 [2024-05-15 13:51:33.416753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.339 [2024-05-15 13:51:33.416781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.339 [2024-05-15 13:51:33.421053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.339 [2024-05-15 13:51:33.421107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.339 [2024-05-15 13:51:33.421135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.339 [2024-05-15 13:51:33.425574] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1df6630) 00:33:20.339 [2024-05-15 13:51:33.425643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.339 [2024-05-15 13:51:33.425673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.339 [2024-05-15 13:51:33.429022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.339 [2024-05-15 13:51:33.429091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.339 [2024-05-15 13:51:33.429121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.598 [2024-05-15 13:51:33.433257] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.598 [2024-05-15 13:51:33.433342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.598 [2024-05-15 13:51:33.433371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.598 [2024-05-15 13:51:33.438165] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.598 [2024-05-15 13:51:33.438220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.598 [2024-05-15 13:51:33.438249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.598 [2024-05-15 13:51:33.441145] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.598 [2024-05-15 13:51:33.441195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.598 [2024-05-15 13:51:33.441224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.598 [2024-05-15 13:51:33.446313] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.598 [2024-05-15 13:51:33.446367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.598 [2024-05-15 13:51:33.446415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.598 [2024-05-15 13:51:33.451565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.598 [2024-05-15 13:51:33.451644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.598 [2024-05-15 13:51:33.451658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.598 [2024-05-15 13:51:33.454695] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.598 [2024-05-15 13:51:33.454747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.598 [2024-05-15 13:51:33.454775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.598 [2024-05-15 13:51:33.459047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.598 [2024-05-15 13:51:33.459101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.598 [2024-05-15 13:51:33.459130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.463311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.463364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.463393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.466655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.466708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.466736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.471094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.471148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.471176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.475985] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.476043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.476073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.479522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.479575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.479605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:33:20.599 [2024-05-15 13:51:33.484395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.484435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.484449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.488771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.488824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.488852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.492615] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.492695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.492738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.497224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.497279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.497292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.502446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.502486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.502500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.507239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.507279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.507292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.511250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.511289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.511302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.514929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.514997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.515010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.519013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.519050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.519064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.523647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.523683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.523696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.526719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.526759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.526772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.531174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.531229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.531260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.536069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.536126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.536155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.539972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.540027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.540063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.543258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.543336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.543349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.547308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.547363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.547392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.552142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.552198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.552211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.556897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.556951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.556981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.560514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.560553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.560567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.564900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.564954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.564967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.569711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.599 [2024-05-15 13:51:33.569792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.599 [2024-05-15 13:51:33.569807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.599 [2024-05-15 13:51:33.574881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.574935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.574963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.577945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.577998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.578011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.582077] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.582130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.582159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.586826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.586879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.586908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.590543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.590597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.590655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.594745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.594796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.594824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.598729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.598783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 
[2024-05-15 13:51:33.598811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.603318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.603369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.603398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.606444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.606496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.606525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.611125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.611178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.611207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.615859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.615913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.615942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.620689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.620731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.620745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.624163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.624204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.624217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.628251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.628292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.628306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.633164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.633238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.633251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.636306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.636370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.636392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.641125] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.641185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.641199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.645370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.645429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.645442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.648882] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.648922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.648935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.652820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.652861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.652874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.656113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.656153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.656167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.660364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.660405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.660419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.664713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.664753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.664767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.668972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.669017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.669048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.672518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.672559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.672572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.677540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.677597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.600 [2024-05-15 13:51:33.677639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.600 [2024-05-15 13:51:33.682841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.600 [2024-05-15 13:51:33.682883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.601 [2024-05-15 13:51:33.682896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.601 [2024-05-15 13:51:33.687562] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.601 [2024-05-15 13:51:33.687652] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.601 [2024-05-15 13:51:33.687667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.601 [2024-05-15 13:51:33.690561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.601 [2024-05-15 13:51:33.690640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.601 [2024-05-15 13:51:33.690656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.601 [2024-05-15 13:51:33.695682] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.601 [2024-05-15 13:51:33.695735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.601 [2024-05-15 13:51:33.695750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.860 [2024-05-15 13:51:33.700190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.860 [2024-05-15 13:51:33.700245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.860 [2024-05-15 13:51:33.700274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.860 [2024-05-15 13:51:33.704554] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.860 [2024-05-15 13:51:33.704597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.860 [2024-05-15 13:51:33.704626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.860 [2024-05-15 13:51:33.709325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.860 [2024-05-15 13:51:33.709395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.860 [2024-05-15 13:51:33.709409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.860 [2024-05-15 13:51:33.712995] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.860 [2024-05-15 13:51:33.713036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.860 [2024-05-15 13:51:33.713050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.860 [2024-05-15 13:51:33.717741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 
00:33:20.860 [2024-05-15 13:51:33.717797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.860 [2024-05-15 13:51:33.717826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.860 [2024-05-15 13:51:33.721796] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.860 [2024-05-15 13:51:33.721866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.860 [2024-05-15 13:51:33.721895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.860 [2024-05-15 13:51:33.725896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.860 [2024-05-15 13:51:33.725967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.860 [2024-05-15 13:51:33.725980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.860 [2024-05-15 13:51:33.730463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.860 [2024-05-15 13:51:33.730535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.860 [2024-05-15 13:51:33.730564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.735089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.735144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.735158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.738992] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.739031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.739044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.742685] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.742740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.742753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.747332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.747388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.747417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.752065] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.752120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.752149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.755294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.755349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.755363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.760409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.760450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.760464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.764179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.764232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.764261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.768543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.768582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.768621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.772987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.773058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.773088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.777130] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.777200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.777229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.781956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.782012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.782025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.786220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.786274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.786303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.790127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.790182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.790212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.795034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.795106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.795119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.799173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.799226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.799256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.803786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.803841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.803871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
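The block above repeats a single three-entry pattern from the SPDK NVMe/TCP initiator: nvme_tcp.c:1450 (nvme_tcp_accel_seq_recv_compute_crc32_done) reports a data digest error on the queue pair, nvme_qpair.c:243 prints the READ command that was affected, and nvme_qpair.c:474 prints its completion with status (00/22), TRANSIENT TRANSPORT ERROR. The digest in question is the NVMe/TCP data digest (DDGST), a CRC32C computed over each data PDU payload; the function name in the log suggests the recomputation goes through SPDK's accel sequence path, and a mismatch against the digest received with the PDU is what gets logged here. The sketch below is only an illustration of that check with the standard CRC32C parameters; the ddgst_matches helper is hypothetical and not an SPDK API.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli): reflected polynomial 0x82F63B78,
 * initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. These are the
 * standard parameters used for the NVMe/TCP data digest (DDGST). */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical helper (not an SPDK API): recompute the digest over a
 * received data PDU payload and compare it with the DDGST value that
 * arrived with the PDU. A mismatch is what the log reports as a
 * "data digest error", after which the command is completed with a
 * transient transport error so the upper layer may retry it. */
static int ddgst_matches(const uint8_t *payload, size_t len, uint32_t recv_ddgst)
{
    return crc32c(payload, len) == recv_ddgst;
}

int main(void)
{
    uint8_t payload[512] = { 0 };            /* stand-in for one data PDU payload */
    uint32_t good = crc32c(payload, sizeof(payload));

    printf("intact payload:    match=%d\n", ddgst_matches(payload, sizeof(payload), good));
    payload[100] ^= 0xFF;                    /* simulate corruption on the wire */
    printf("corrupted payload: match=%d\n", ddgst_matches(payload, sizeof(payload), good));
    return 0;
}

Because the mismatch is detected on the transport, the command is not reported as a media error; it is failed with a retryable transport status, which is why every completion above carries dnr:0.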
00:33:20.861 [2024-05-15 13:51:33.807993] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.808047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.808077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.811904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.811959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.812004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.861 [2024-05-15 13:51:33.816872] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.861 [2024-05-15 13:51:33.816927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.861 [2024-05-15 13:51:33.816940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.821986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.822045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.822075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.825840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.825895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.825925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.830718] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.830768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.830782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.835314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.835368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.835382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.839043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.839096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.839125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.844059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.844112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.844141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.848527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.848567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.848581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.852841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.852911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.852941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.856863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.856917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.856946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.860818] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.860887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.860917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.865405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.865459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.865488] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.868788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.868842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.868889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.873316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.873370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.873399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.878294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.878365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.878378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.883390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.883445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.883474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.887040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.887094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.887124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.891803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.891857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.862 [2024-05-15 13:51:33.891885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.862 [2024-05-15 13:51:33.895821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.862 [2024-05-15 13:51:33.895873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 
13:51:33.895902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.863 [2024-05-15 13:51:33.899909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.863 [2024-05-15 13:51:33.899963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 13:51:33.899993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.863 [2024-05-15 13:51:33.903937] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.863 [2024-05-15 13:51:33.903992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 13:51:33.904006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.863 [2024-05-15 13:51:33.908881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.863 [2024-05-15 13:51:33.908920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 13:51:33.908934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.863 [2024-05-15 13:51:33.912620] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.863 [2024-05-15 13:51:33.912659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 13:51:33.912672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.863 [2024-05-15 13:51:33.916812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.863 [2024-05-15 13:51:33.916855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 13:51:33.916868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.863 [2024-05-15 13:51:33.920709] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.863 [2024-05-15 13:51:33.920760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 13:51:33.920806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.863 [2024-05-15 13:51:33.925111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.863 [2024-05-15 13:51:33.925164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 13:51:33.925193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.863 [2024-05-15 13:51:33.929028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.863 [2024-05-15 13:51:33.929082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 13:51:33.929111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.863 [2024-05-15 13:51:33.933951] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.863 [2024-05-15 13:51:33.933995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 13:51:33.934008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.863 [2024-05-15 13:51:33.938630] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.863 [2024-05-15 13:51:33.938707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 13:51:33.938737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.863 [2024-05-15 13:51:33.941325] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.863 [2024-05-15 13:51:33.941377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 13:51:33.941405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:20.863 [2024-05-15 13:51:33.946511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.863 [2024-05-15 13:51:33.946565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 13:51:33.946594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:20.863 [2024-05-15 13:51:33.950302] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.863 [2024-05-15 13:51:33.950402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 13:51:33.950431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:20.863 [2024-05-15 13:51:33.954485] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:20.863 [2024-05-15 13:51:33.954538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.863 [2024-05-15 13:51:33.954567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:33.959195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.123 [2024-05-15 13:51:33.959265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.123 [2024-05-15 13:51:33.959294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:33.963836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.123 [2024-05-15 13:51:33.963892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.123 [2024-05-15 13:51:33.963921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:33.967287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.123 [2024-05-15 13:51:33.967340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.123 [2024-05-15 13:51:33.967368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:33.971344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.123 [2024-05-15 13:51:33.971397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.123 [2024-05-15 13:51:33.971426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:33.975916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.123 [2024-05-15 13:51:33.975969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.123 [2024-05-15 13:51:33.975997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:33.979301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.123 [2024-05-15 13:51:33.979353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.123 [2024-05-15 13:51:33.979381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:33.983236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.123 [2024-05-15 13:51:33.983290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.123 [2024-05-15 13:51:33.983318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:33.987699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.123 [2024-05-15 13:51:33.987752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.123 [2024-05-15 13:51:33.987780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:33.991482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.123 [2024-05-15 13:51:33.991537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.123 [2024-05-15 13:51:33.991566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:33.995567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.123 [2024-05-15 13:51:33.995662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.123 [2024-05-15 13:51:33.995676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:34.000084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.123 [2024-05-15 13:51:34.000138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.123 [2024-05-15 13:51:34.000167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:34.005023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.123 [2024-05-15 13:51:34.005076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.123 [2024-05-15 13:51:34.005105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:34.009281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.123 [2024-05-15 13:51:34.009335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.123 [2024-05-15 13:51:34.009363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:34.012173] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 
00:33:21.123 [2024-05-15 13:51:34.012225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.123 [2024-05-15 13:51:34.012253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.123 [2024-05-15 13:51:34.016836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.016889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.016918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.021351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.021403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.021431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.024947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.024999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.025043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.029223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.029277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.029305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.033740] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.033792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.033820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.037749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.037802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.037831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.040838] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.040891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.040920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.045337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.045379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.045393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.050253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.050309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.050322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.053835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.053889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.053919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.058413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.058455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.058467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.062912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.062967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.062997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.066802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.066857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.066886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.071281] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.071324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.071337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.076797] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.076837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.076850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.080098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.080139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.080153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.084750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.084795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.084809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.089468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.089509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.089523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.092694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.092734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.092748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.097113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.097159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.097173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:33:21.124 [2024-05-15 13:51:34.101342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.101387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.101401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.105097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.105137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.105151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.124 [2024-05-15 13:51:34.110085] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.124 [2024-05-15 13:51:34.110129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.124 [2024-05-15 13:51:34.110143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.115092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.115151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.115164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.118611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.118662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.118676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.122921] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.122976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.122990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.127813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.127870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.127883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.132371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.132412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.132436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.135436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.135500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.135513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.140059] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.140117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.140132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.144033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.144090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.144103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.147799] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.147857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.147871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.152311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.152364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.152379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.156450] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.156493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.156507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.160764] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.160805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.160818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.165190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.165250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.165263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.168824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.168865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.168879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.173619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.173675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.173689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.177747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.177789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.177803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.181900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.181942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.181955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.186804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.186845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
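For reference when reading the completion lines, the fields printed by spdk_nvme_print_completion (cdw0, sqhd, p, m, dnr and the (SCT/SC) pair) all come from the NVMe completion queue entry. The minimal decoder below is written against the status-and-phase word layout from the NVMe base specification rather than against any SPDK structure, and shows how (00/22), p:0, m:0 and dnr:0 map onto that word.

#include <stdint.h>
#include <stdio.h>

/* Layout of the 16-bit status-and-phase word in an NVMe completion
 * queue entry (upper half of completion dword 3):
 *   bit  0     P   - phase tag            ("p:" in the log)
 *   bits 8:1   SC  - status code          (second number in "(00/22)")
 *   bits 11:9  SCT - status code type     (first number in "(00/22)")
 *   bits 13:12 CRD - command retry delay
 *   bit  14    M   - more                 ("m:" in the log)
 *   bit  15    DNR - do not retry         ("dnr:" in the log)
 */
struct nvme_status {
    unsigned p, sc, sct, crd, m, dnr;
};

static struct nvme_status decode_status(uint16_t word)
{
    struct nvme_status s = {
        .p   = word & 0x1u,
        .sc  = (word >> 1) & 0xFFu,
        .sct = (word >> 9) & 0x7u,
        .crd = (word >> 12) & 0x3u,
        .m   = (word >> 14) & 0x1u,
        .dnr = (word >> 15) & 0x1u,
    };
    return s;
}

int main(void)
{
    /* SCT 0x0 / SC 0x22 is the generic "Transient Transport Error"
     * status printed as "(00/22)" in the completions above. */
    uint16_t word = (uint16_t)((0x0u << 9) | (0x22u << 1));
    struct nvme_status s = decode_status(word);

    printf("sct=%02x sc=%02x p=%u m=%u dnr=%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
    return 0;
}

Compiled and run, it prints sct=00 sc=22 p=0 m=0 dnr=0, matching the completions logged above.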
00:33:21.125 [2024-05-15 13:51:34.186859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.190364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.190420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.190434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.194413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.194472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.194486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.199446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.199491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.199504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.203795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.203835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.203848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.207424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.207480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.207509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.212117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.212173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.125 [2024-05-15 13:51:34.212203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.125 [2024-05-15 13:51:34.217113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.125 [2024-05-15 13:51:34.217168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13888 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.126 [2024-05-15 13:51:34.217198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.220751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.220792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.220804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.225272] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.225344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.225358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.230035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.230090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.230120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.233210] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.233265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.233294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.238064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.238136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.238165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.243169] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.243225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.243254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.247526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.247580] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.247610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.250930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.250984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.250997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.255135] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.255207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.255221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.259448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.259503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.259533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.263356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.263410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.263439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.267382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.267437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.267467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.271743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.271783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.271796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.275418] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 
00:33:21.386 [2024-05-15 13:51:34.275475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.275488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.279529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.279600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.279613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.283431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.283487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.283501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.287519] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.386 [2024-05-15 13:51:34.287575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.386 [2024-05-15 13:51:34.287604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.386 [2024-05-15 13:51:34.291944] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.291999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.292013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.295215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.295269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.295307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.299525] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.299582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.299596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.303952] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.304010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.304024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.307853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.307910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.307923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.312616] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.312667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.312680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.317281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.317341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.317372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.320067] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.320120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.320149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.325147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.325204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.325218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.329478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.329533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.329564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.333244] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.333300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.333330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.337935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.337991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.338021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.342719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.342772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.342802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.347514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.347568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.347599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.351351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.351409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.351422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.355886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.355942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.355971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.361073] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.361128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.361158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:33:21.387 [2024-05-15 13:51:34.365827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.365881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.365911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.368581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.368630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.368644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.373551] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.373591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.373625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.377636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.377674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.377688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.381082] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.381122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.381135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.385488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.385529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.385543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.389369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.389410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.389424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.393790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.393830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.393843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.397998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.398038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.398052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.402970] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.403029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.403042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.387 [2024-05-15 13:51:34.406376] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.387 [2024-05-15 13:51:34.406441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.387 [2024-05-15 13:51:34.406455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.411111] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.411168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.411181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.415217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.415273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.415287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.419301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.419358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.419372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.423841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.423896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.423910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.428094] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.428166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.428180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.431998] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.432054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.432068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.436334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.436391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.436404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.440394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.440434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.440447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.444859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.444897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.444910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.450118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.450157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:21.388 [2024-05-15 13:51:34.450187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.454625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.454690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.454720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.458235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.458275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.458289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.462523] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.462582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.462595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.467092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.467147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.467177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.471452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.471507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.471536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.475971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.476027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.476057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.388 [2024-05-15 13:51:34.479837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.388 [2024-05-15 13:51:34.479909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.388 [2024-05-15 13:51:34.479939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.484614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.484665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.484679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.488211] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.488267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.488280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.492578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.492635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.492649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.497787] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.497829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.497842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.501343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.501403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.501433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.505553] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.505638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.505653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.510600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.510665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.510696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.513840] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.513896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.513909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.518294] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.518350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.518380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.522672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.522726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.522755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.526301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.526342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.526371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.530646] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.530700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.530713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.534457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.534513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.534543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.538246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 
00:33:21.650 [2024-05-15 13:51:34.538285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.538315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.542478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.542533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.542563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.546451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.546506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.546536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.550968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.551024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.551055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.650 [2024-05-15 13:51:34.554275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.650 [2024-05-15 13:51:34.554327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.650 [2024-05-15 13:51:34.554373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.558487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.558542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.558571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.562954] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.562995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.563008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.567037] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.567077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.567090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.571557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.571598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.571626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.574480] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.574531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.574545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.578657] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.578694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.578718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.584201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.584245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.584259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.587315] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.587354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.587368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.592113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.592154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.592168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.596419] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.596460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.596473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.600943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.600982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.600995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.604680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.604726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.604740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.609817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.609858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.609871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.614888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.614930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.614943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.618359] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.618414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.618427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.623074] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.623122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.623136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.627831] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.627872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.627885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.631513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.631567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.631580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.635035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.635090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.635120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.639369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.639408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.639438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.643871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.643926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.643957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.647228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.647298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.647312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.653726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.653788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.653826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.658716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.658773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.658788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.663061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.663116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.663146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.666889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.666945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.666959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.671808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.671849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.651 [2024-05-15 13:51:34.671862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.651 [2024-05-15 13:51:34.676288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.651 [2024-05-15 13:51:34.676353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.676368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.680674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.680717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.680731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.684684] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.684725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:21.652 [2024-05-15 13:51:34.684738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.688711] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.688753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.688767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.693423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.693480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.693510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.696802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.696844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.696857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.701282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.701323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.701337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.705897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.705938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.705951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.710258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.710298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.710328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.714423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.714477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.714506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.718468] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.718509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.718523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.722829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.722884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.722899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.726415] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.726471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.726500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.731319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.731374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.731403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.735239] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.735294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.735324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.739266] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.739324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.739354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.652 [2024-05-15 13:51:34.744097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.652 [2024-05-15 13:51:34.744146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.652 [2024-05-15 13:51:34.744159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.748385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.748424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.748437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.751515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.751554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.751568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.755841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.755883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.755897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.760158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.760199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.760213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.764428] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.764469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.764482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.767675] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.767715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.767728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.772133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 
00:33:21.948 [2024-05-15 13:51:34.772174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.772188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.776171] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.776212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.776225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.779625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.779664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.779677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.784223] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.784282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.784295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.789247] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.789327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.789341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.792018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.792071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.792101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.797582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.797637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.797651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.801144] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.801200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.801213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.805907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.805947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.805960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.810830] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.810871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.810884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.814226] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.814266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.814280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.818497] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.818537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.818551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.823521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.823563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.823576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.828856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.828896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.948 [2024-05-15 13:51:34.828910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.948 [2024-05-15 13:51:34.832205] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.948 [2024-05-15 13:51:34.832259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.949 [2024-05-15 13:51:34.832273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.949 [2024-05-15 13:51:34.836388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.949 [2024-05-15 13:51:34.836429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.949 [2024-05-15 13:51:34.836442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.949 [2024-05-15 13:51:34.841052] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.949 [2024-05-15 13:51:34.841093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.949 [2024-05-15 13:51:34.841107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.949 [2024-05-15 13:51:34.844768] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.949 [2024-05-15 13:51:34.844807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.949 [2024-05-15 13:51:34.844820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.949 [2024-05-15 13:51:34.848956] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.949 [2024-05-15 13:51:34.848997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.949 [2024-05-15 13:51:34.849011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.949 [2024-05-15 13:51:34.854053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.949 [2024-05-15 13:51:34.854095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.949 [2024-05-15 13:51:34.854109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.949 [2024-05-15 13:51:34.857202] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.949 [2024-05-15 13:51:34.857244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.949 [2024-05-15 13:51:34.857258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 
m:0 dnr:0 00:33:21.949 [2024-05-15 13:51:34.862015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.949 [2024-05-15 13:51:34.862060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.949 [2024-05-15 13:51:34.862073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:21.949 [2024-05-15 13:51:34.866532] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.949 [2024-05-15 13:51:34.866574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.949 [2024-05-15 13:51:34.866588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:21.949 [2024-05-15 13:51:34.870093] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.949 [2024-05-15 13:51:34.870148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.949 [2024-05-15 13:51:34.870161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.949 [2024-05-15 13:51:34.874927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1df6630) 00:33:21.949 [2024-05-15 13:51:34.874985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.949 [2024-05-15 13:51:34.874999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:21.949 00:33:21.949 Latency(us) 00:33:21.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:21.949 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:21.949 nvme0n1 : 2.00 7332.66 916.58 0.00 0.00 2177.49 603.23 6702.55 00:33:21.949 =================================================================================================================== 00:33:21.949 Total : 7332.66 916.58 0.00 0.00 2177.49 603.23 6702.55 00:33:21.949 0 00:33:21.949 13:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:21.949 13:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:21.949 13:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:21.949 13:51:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:21.949 | .driver_specific 00:33:21.949 | .nvme_error 00:33:21.949 | .status_code 00:33:21.949 | .command_transient_transport_error' 00:33:22.207 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 473 > 0 )) 00:33:22.207 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112898 00:33:22.207 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 
112898 ']' 00:33:22.207 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 112898 00:33:22.207 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:22.207 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:22.207 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112898 00:33:22.207 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:22.208 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:22.208 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112898' 00:33:22.208 killing process with pid 112898 00:33:22.208 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 112898 00:33:22.208 Received shutdown signal, test time was about 2.000000 seconds 00:33:22.208 00:33:22.208 Latency(us) 00:33:22.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.208 =================================================================================================================== 00:33:22.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:22.208 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 112898 00:33:22.466 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:22.466 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:22.466 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:22.466 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:22.466 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:22.466 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=112984 00:33:22.466 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 112984 /var/tmp/bperf.sock 00:33:22.466 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:22.466 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 112984 ']' 00:33:22.466 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:22.466 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:22.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:22.466 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:22.466 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:22.466 13:51:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:22.466 [2024-05-15 13:51:35.475864] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
00:33:22.466 [2024-05-15 13:51:35.475976] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112984 ] 00:33:22.724 [2024-05-15 13:51:35.594440] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:22.724 [2024-05-15 13:51:35.609868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:22.724 [2024-05-15 13:51:35.706870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.661 13:51:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:23.661 13:51:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:23.662 13:51:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:23.662 13:51:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:23.662 13:51:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:23.662 13:51:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.662 13:51:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:23.662 13:51:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.662 13:51:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:23.662 13:51:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:24.229 nvme0n1 00:33:24.229 13:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:24.229 13:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:24.229 13:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:24.229 13:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:24.229 13:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:24.229 13:51:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:24.229 Running I/O for 2 seconds... 
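For reference, the randwrite digest-error run traced above reduces to roughly the following command sequence. This is a condensed sketch of the commands already shown in the trace, not an additional test step: the binary paths, the bperf RPC socket, the TCP address/NQN, and the jq filter are copied from the trace, while the target-side RPC socket used by rpc_cmd is not visible in this excerpt and is assumed to be the default here.

# Start bdevperf on core 1 (core mask 0x2) with its own RPC socket, as traced above
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

# Enable per-controller NVMe error counters and unlimited bdev retries
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Reset any previous crc32c error injection on the target (default RPC socket assumed),
# then attach the controller with TCP data digest enabled (--ddgst)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Enable crc32c corruption on the target (flags as traced above; default RPC socket assumed),
# then kick off the 2-second workload through bdevperf's RPC socket
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

# Afterwards the harness reads the transient transport error counter from the bdev
# iostat output, exactly as in the randread case earlier in this log
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'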
00:33:24.229 [2024-05-15 13:51:37.226220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ee5c8 00:33:24.229 [2024-05-15 13:51:37.227188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.229 [2024-05-15 13:51:37.227244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:24.229 [2024-05-15 13:51:37.237529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e2c28 00:33:24.229 [2024-05-15 13:51:37.238265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.229 [2024-05-15 13:51:37.238303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:24.229 [2024-05-15 13:51:37.251649] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f7538 00:33:24.229 [2024-05-15 13:51:37.253367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.229 [2024-05-15 13:51:37.253420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:24.229 [2024-05-15 13:51:37.262336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e2c28 00:33:24.229 [2024-05-15 13:51:37.263887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.229 [2024-05-15 13:51:37.263936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:24.229 [2024-05-15 13:51:37.270563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fef90 00:33:24.229 [2024-05-15 13:51:37.271342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.229 [2024-05-15 13:51:37.271395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:24.229 [2024-05-15 13:51:37.284273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fa7d8 00:33:24.229 [2024-05-15 13:51:37.285663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.229 [2024-05-15 13:51:37.285715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:24.229 [2024-05-15 13:51:37.295911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f5378 00:33:24.229 [2024-05-15 13:51:37.296876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.229 [2024-05-15 13:51:37.296915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 
sqhd:0079 p:0 m:0 dnr:0 00:33:24.229 [2024-05-15 13:51:37.306833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e2c28 00:33:24.229 [2024-05-15 13:51:37.307661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.229 [2024-05-15 13:51:37.307699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:24.229 [2024-05-15 13:51:37.320464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e4de8 00:33:24.229 [2024-05-15 13:51:37.322395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.229 [2024-05-15 13:51:37.322432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.328851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fa3a0 00:33:24.489 [2024-05-15 13:51:37.329622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.329660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.342436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e5658 00:33:24.489 [2024-05-15 13:51:37.344086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.344140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.353197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e0ea0 00:33:24.489 [2024-05-15 13:51:37.354524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.354577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.364272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e8d30 00:33:24.489 [2024-05-15 13:51:37.365558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.365617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.375694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f0350 00:33:24.489 [2024-05-15 13:51:37.376526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.376564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.386554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e3060 00:33:24.489 [2024-05-15 13:51:37.387282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.387335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.399879] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e27f0 00:33:24.489 [2024-05-15 13:51:37.401377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.401415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.411066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190eaef0 00:33:24.489 [2024-05-15 13:51:37.412365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.412404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.422086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fb048 00:33:24.489 [2024-05-15 13:51:37.423231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.423269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.433136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190df550 00:33:24.489 [2024-05-15 13:51:37.434106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.434144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.445360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f7100 00:33:24.489 [2024-05-15 13:51:37.446669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:10081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.446708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.457247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e6b70 00:33:24.489 [2024-05-15 13:51:37.458540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.458577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.470831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f5be8 00:33:24.489 [2024-05-15 13:51:37.472779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.472814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.479181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f9f68 00:33:24.489 [2024-05-15 13:51:37.480169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.480206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.491193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ecc78 00:33:24.489 [2024-05-15 13:51:37.492360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.492395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.502806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fbcf0 00:33:24.489 [2024-05-15 13:51:37.503482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.503518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.514316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190feb58 00:33:24.489 [2024-05-15 13:51:37.515327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.515379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.525390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190eaab8 00:33:24.489 [2024-05-15 13:51:37.526250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.489 [2024-05-15 13:51:37.526289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.489 [2024-05-15 13:51:37.539232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f3a28 00:33:24.489 [2024-05-15 13:51:37.540942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.490 [2024-05-15 13:51:37.540980] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:24.490 [2024-05-15 13:51:37.549522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190eff18 00:33:24.490 [2024-05-15 13:51:37.550356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.490 [2024-05-15 13:51:37.550397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:24.490 [2024-05-15 13:51:37.561234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ebfd0 00:33:24.490 [2024-05-15 13:51:37.562572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.490 [2024-05-15 13:51:37.562652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:24.490 [2024-05-15 13:51:37.573055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f3a28 00:33:24.490 [2024-05-15 13:51:37.573934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.490 [2024-05-15 13:51:37.573971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:24.490 [2024-05-15 13:51:37.584076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fbcf0 00:33:24.490 [2024-05-15 13:51:37.584846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.490 [2024-05-15 13:51:37.584886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:24.749 [2024-05-15 13:51:37.597489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e5ec8 00:33:24.749 [2024-05-15 13:51:37.599314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.749 [2024-05-15 13:51:37.599364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:24.749 [2024-05-15 13:51:37.605816] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190df550 00:33:24.749 [2024-05-15 13:51:37.606709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.749 [2024-05-15 13:51:37.606777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:24.749 [2024-05-15 13:51:37.617563] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190efae0 00:33:24.749 [2024-05-15 13:51:37.618676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.749 [2024-05-15 
13:51:37.618742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:24.749 [2024-05-15 13:51:37.631465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e9e10 00:33:24.749 [2024-05-15 13:51:37.633231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.749 [2024-05-15 13:51:37.633285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:24.749 [2024-05-15 13:51:37.643524] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fa3a0 00:33:24.749 [2024-05-15 13:51:37.645204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.749 [2024-05-15 13:51:37.645258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:24.749 [2024-05-15 13:51:37.654534] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190dece0 00:33:24.749 [2024-05-15 13:51:37.656063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.749 [2024-05-15 13:51:37.656116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:24.749 [2024-05-15 13:51:37.663041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e0ea0 00:33:24.749 [2024-05-15 13:51:37.663784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.749 [2024-05-15 13:51:37.663819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:24.749 [2024-05-15 13:51:37.674900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fb480 00:33:24.749 [2024-05-15 13:51:37.675648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.749 [2024-05-15 13:51:37.675685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:24.749 [2024-05-15 13:51:37.688634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e3d08 00:33:24.750 [2024-05-15 13:51:37.689501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.750 [2024-05-15 13:51:37.689538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:24.750 [2024-05-15 13:51:37.699584] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fb480 00:33:24.750 [2024-05-15 13:51:37.700430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:24.750 [2024-05-15 13:51:37.700466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:24.750 [2024-05-15 13:51:37.710617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f5378 00:33:24.750 [2024-05-15 13:51:37.711928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.750 [2024-05-15 13:51:37.711986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:24.750 [2024-05-15 13:51:37.722004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e9168 00:33:24.750 [2024-05-15 13:51:37.723118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.750 [2024-05-15 13:51:37.723170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:24.750 [2024-05-15 13:51:37.736145] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f1868 00:33:24.750 [2024-05-15 13:51:37.737928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.750 [2024-05-15 13:51:37.737968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:24.750 [2024-05-15 13:51:37.744726] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f31b8 00:33:24.750 [2024-05-15 13:51:37.745517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.750 [2024-05-15 13:51:37.745548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:24.750 [2024-05-15 13:51:37.759079] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fd640 00:33:24.750 [2024-05-15 13:51:37.760572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.750 [2024-05-15 13:51:37.760623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:24.750 [2024-05-15 13:51:37.771251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f6890 00:33:24.750 [2024-05-15 13:51:37.772242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.750 [2024-05-15 13:51:37.772313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:24.750 [2024-05-15 13:51:37.782736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e5ec8 00:33:24.750 [2024-05-15 13:51:37.783592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6803 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:24.750 [2024-05-15 13:51:37.783639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:24.750 [2024-05-15 13:51:37.793970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ef6a8 00:33:24.750 [2024-05-15 13:51:37.794618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.750 [2024-05-15 13:51:37.794666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:24.750 [2024-05-15 13:51:37.807306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fcdd0 00:33:24.750 [2024-05-15 13:51:37.808752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.750 [2024-05-15 13:51:37.808787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:24.750 [2024-05-15 13:51:37.818579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190eff18 00:33:24.750 [2024-05-15 13:51:37.819872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.750 [2024-05-15 13:51:37.819924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:24.750 [2024-05-15 13:51:37.829786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190dfdc0 00:33:24.750 [2024-05-15 13:51:37.830902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.750 [2024-05-15 13:51:37.830937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:24.750 [2024-05-15 13:51:37.843944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ea680 00:33:24.750 [2024-05-15 13:51:37.846009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:24.750 [2024-05-15 13:51:37.846076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.009 [2024-05-15 13:51:37.852452] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e4140 00:33:25.009 [2024-05-15 13:51:37.853466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.009 [2024-05-15 13:51:37.853502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:25.009 [2024-05-15 13:51:37.864086] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190de8a8 00:33:25.009 [2024-05-15 13:51:37.865144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:1166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:37.865193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:37.875247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ef6a8 00:33:25.010 [2024-05-15 13:51:37.876131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:37.876167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:37.889790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e5ec8 00:33:25.010 [2024-05-15 13:51:37.891501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:37.891541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:37.898447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f4b08 00:33:25.010 [2024-05-15 13:51:37.899180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:37.899219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:37.910637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f6cc8 00:33:25.010 [2024-05-15 13:51:37.911336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:37.911373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:37.924686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190dece0 00:33:25.010 [2024-05-15 13:51:37.925615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:37.925663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:37.936558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f0350 00:33:25.010 [2024-05-15 13:51:37.937835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:37.937884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:37.948001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e73e0 00:33:25.010 [2024-05-15 13:51:37.949122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:21 nsid:1 lba:10560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:37.949172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:37.959137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f9b30 00:33:25.010 [2024-05-15 13:51:37.960048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:37.960100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:37.970455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f81e0 00:33:25.010 [2024-05-15 13:51:37.971033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:37.971081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:37.983878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ff3c8 00:33:25.010 [2024-05-15 13:51:37.985246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:37.985299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:37.994940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fb8b8 00:33:25.010 [2024-05-15 13:51:37.996318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:37.996381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:38.006243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f0350 00:33:25.010 [2024-05-15 13:51:38.007469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:38.007520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:38.017371] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e9e10 00:33:25.010 [2024-05-15 13:51:38.018645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:38.018720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:38.029737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ec840 00:33:25.010 [2024-05-15 13:51:38.031392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:38.031441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:38.038074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f9f68 00:33:25.010 [2024-05-15 13:51:38.038900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:38.038951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:38.049497] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e5658 00:33:25.010 [2024-05-15 13:51:38.050291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:38.050357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:38.062264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e4de8 00:33:25.010 [2024-05-15 13:51:38.063845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:38.063898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:38.073802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190df118 00:33:25.010 [2024-05-15 13:51:38.075094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:38.075151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:38.085868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e9168 00:33:25.010 [2024-05-15 13:51:38.087124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:38.087176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:25.010 [2024-05-15 13:51:38.096892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fdeb0 00:33:25.010 [2024-05-15 13:51:38.098060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.010 [2024-05-15 13:51:38.098110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:25.269 [2024-05-15 13:51:38.110958] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190df550 00:33:25.269 [2024-05-15 
13:51:38.112905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.269 [2024-05-15 13:51:38.112955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:25.269 [2024-05-15 13:51:38.119313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e3498 00:33:25.269 [2024-05-15 13:51:38.120332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.269 [2024-05-15 13:51:38.120385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:25.269 [2024-05-15 13:51:38.131156] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190eb328 00:33:25.269 [2024-05-15 13:51:38.132115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.269 [2024-05-15 13:51:38.132152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:25.269 [2024-05-15 13:51:38.143367] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f3e60 00:33:25.269 [2024-05-15 13:51:38.144028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.269 [2024-05-15 13:51:38.144066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:25.269 [2024-05-15 13:51:38.156942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e2c28 00:33:25.269 [2024-05-15 13:51:38.158354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.269 [2024-05-15 13:51:38.158392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:25.269 [2024-05-15 13:51:38.166429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190eea00 00:33:25.269 [2024-05-15 13:51:38.167214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.269 [2024-05-15 13:51:38.167252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:25.269 [2024-05-15 13:51:38.181033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fda78 00:33:25.269 [2024-05-15 13:51:38.182803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.269 [2024-05-15 13:51:38.182842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:25.269 [2024-05-15 13:51:38.189039] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f6cc8 
00:33:25.269 [2024-05-15 13:51:38.189841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.269 [2024-05-15 13:51:38.189881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:25.269 [2024-05-15 13:51:38.203214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e6b70 00:33:25.269 [2024-05-15 13:51:38.204728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.269 [2024-05-15 13:51:38.204791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:25.270 [2024-05-15 13:51:38.214178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190de8a8 00:33:25.270 [2024-05-15 13:51:38.215423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.270 [2024-05-15 13:51:38.215477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:25.270 [2024-05-15 13:51:38.225440] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190df988 00:33:25.270 [2024-05-15 13:51:38.226680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.270 [2024-05-15 13:51:38.226730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:25.270 [2024-05-15 13:51:38.239355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e4140 00:33:25.270 [2024-05-15 13:51:38.241211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.270 [2024-05-15 13:51:38.241260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:25.270 [2024-05-15 13:51:38.247674] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fef90 00:33:25.270 [2024-05-15 13:51:38.248577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.270 [2024-05-15 13:51:38.248619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:25.270 [2024-05-15 13:51:38.261729] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f8e88 00:33:25.270 [2024-05-15 13:51:38.263078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.270 [2024-05-15 13:51:38.263117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:25.270 [2024-05-15 13:51:38.270897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) 
with pdu=0x2000190fcdd0 00:33:25.270 [2024-05-15 13:51:38.271629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.270 [2024-05-15 13:51:38.271674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:25.270 [2024-05-15 13:51:38.283884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f8618 00:33:25.270 [2024-05-15 13:51:38.284845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.270 [2024-05-15 13:51:38.284896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:25.270 [2024-05-15 13:51:38.294554] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f1430 00:33:25.270 [2024-05-15 13:51:38.295773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.270 [2024-05-15 13:51:38.295840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:25.270 [2024-05-15 13:51:38.305788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e6300 00:33:25.270 [2024-05-15 13:51:38.306837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.270 [2024-05-15 13:51:38.306888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:25.270 [2024-05-15 13:51:38.319629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f9f68 00:33:25.270 [2024-05-15 13:51:38.321410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.270 [2024-05-15 13:51:38.321446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:25.270 [2024-05-15 13:51:38.327904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fc128 00:33:25.270 [2024-05-15 13:51:38.328692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.270 [2024-05-15 13:51:38.328728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:25.270 [2024-05-15 13:51:38.341808] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f6cc8 00:33:25.270 [2024-05-15 13:51:38.343048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.270 [2024-05-15 13:51:38.343098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:25.270 [2024-05-15 13:51:38.354797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x160e270) with pdu=0x2000190ed4e8 00:33:25.270 [2024-05-15 13:51:38.356519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.270 [2024-05-15 13:51:38.356555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:25.270 [2024-05-15 13:51:38.366019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fe720 00:33:25.529 [2024-05-15 13:51:38.367601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.367664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.377386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f1ca0 00:33:25.529 [2024-05-15 13:51:38.379000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.379050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.387874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f8a50 00:33:25.529 [2024-05-15 13:51:38.389751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.389804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.400016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f20d8 00:33:25.529 [2024-05-15 13:51:38.401040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.401092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.410878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e4140 00:33:25.529 [2024-05-15 13:51:38.411689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.411740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.422000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190de8a8 00:33:25.529 [2024-05-15 13:51:38.422705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.422746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.432482] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f8e88 00:33:25.529 [2024-05-15 13:51:38.433275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.433314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.446430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190feb58 00:33:25.529 [2024-05-15 13:51:38.447937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.447977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.458449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190dece0 00:33:25.529 [2024-05-15 13:51:38.460076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.460116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.468859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e49b0 00:33:25.529 [2024-05-15 13:51:38.470859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.470912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.478950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ebfd0 00:33:25.529 [2024-05-15 13:51:38.479799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.479849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.492717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f4298 00:33:25.529 [2024-05-15 13:51:38.494083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.494135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.503574] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e1b48 00:33:25.529 [2024-05-15 13:51:38.504731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.504767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.515629] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fef90 00:33:25.529 [2024-05-15 13:51:38.517128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.517181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.526831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fd208 00:33:25.529 [2024-05-15 13:51:38.528112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.528167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.538311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fef90 00:33:25.529 [2024-05-15 13:51:38.539523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.539559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.552640] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e3060 00:33:25.529 [2024-05-15 13:51:38.554459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.554498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.561118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e5220 00:33:25.529 [2024-05-15 13:51:38.562035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.562085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.575198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190de470 00:33:25.529 [2024-05-15 13:51:38.576886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.576922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.587164] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f46d0 00:33:25.529 [2024-05-15 13:51:38.588760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.588796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:25.529 
[2024-05-15 13:51:38.597700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e1f80 00:33:25.529 [2024-05-15 13:51:38.599031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.599068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.609100] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f6cc8 00:33:25.529 [2024-05-15 13:51:38.610379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.610431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:25.529 [2024-05-15 13:51:38.620882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f6890 00:33:25.529 [2024-05-15 13:51:38.621697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.529 [2024-05-15 13:51:38.621735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.632400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e4578 00:33:25.789 [2024-05-15 13:51:38.633095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.633149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.645208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ebfd0 00:33:25.789 [2024-05-15 13:51:38.646643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.646717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.657025] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e4578 00:33:25.789 [2024-05-15 13:51:38.658765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.658818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.665038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e27f0 00:33:25.789 [2024-05-15 13:51:38.665897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.665948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000e 
p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.678588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fbcf0 00:33:25.789 [2024-05-15 13:51:38.680103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.680155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.688957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f92c0 00:33:25.789 [2024-05-15 13:51:38.690333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.690386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.700308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fbcf0 00:33:25.789 [2024-05-15 13:51:38.701505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.701553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.714252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e27f0 00:33:25.789 [2024-05-15 13:51:38.716082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.716117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.722703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e4578 00:33:25.789 [2024-05-15 13:51:38.723592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.723665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.737189] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ed4e8 00:33:25.789 [2024-05-15 13:51:38.738799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.738841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.747921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f4298 00:33:25.789 [2024-05-15 13:51:38.749296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.749349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:78 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.759508] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fc998 00:33:25.789 [2024-05-15 13:51:38.760645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.760679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.770326] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e73e0 00:33:25.789 [2024-05-15 13:51:38.771271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.771309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.781523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e6b70 00:33:25.789 [2024-05-15 13:51:38.782353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.782410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.795328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fa7d8 00:33:25.789 [2024-05-15 13:51:38.796398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:20721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.796437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.806135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e0630 00:33:25.789 [2024-05-15 13:51:38.806961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.807016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.816798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e4578 00:33:25.789 [2024-05-15 13:51:38.817485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.817522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.831221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e1710 00:33:25.789 [2024-05-15 13:51:38.833100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.833136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.839540] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e7818 00:33:25.789 [2024-05-15 13:51:38.840497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.840534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.853706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e8088 00:33:25.789 [2024-05-15 13:51:38.855305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.855341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.864645] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e3d08 00:33:25.789 [2024-05-15 13:51:38.866043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.866079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:25.789 [2024-05-15 13:51:38.875922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ea680 00:33:25.789 [2024-05-15 13:51:38.877114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:25.789 [2024-05-15 13:51:38.877150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:38.889733] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190eff18 00:33:26.049 [2024-05-15 13:51:38.891660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:38.891712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:38.898032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f46d0 00:33:26.049 [2024-05-15 13:51:38.899052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:38.899103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:38.909722] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190eaab8 00:33:26.049 [2024-05-15 13:51:38.910766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:38.910816] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:38.923344] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190de470 00:33:26.049 [2024-05-15 13:51:38.925036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:38.925087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:38.931668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fb8b8 00:33:26.049 [2024-05-15 13:51:38.932376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:38.932413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:38.945894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ddc00 00:33:26.049 [2024-05-15 13:51:38.947275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:38.947326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:38.956805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f8a50 00:33:26.049 [2024-05-15 13:51:38.958019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:38.958071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:38.967820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ed920 00:33:26.049 [2024-05-15 13:51:38.968928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:38.968980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:38.981751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f4b08 00:33:26.049 [2024-05-15 13:51:38.983509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:38.983558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:38.989978] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190dece0 00:33:26.049 [2024-05-15 13:51:38.990827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:38.990881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:39.002137] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fd208 00:33:26.049 [2024-05-15 13:51:39.002933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:39.002971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:39.015831] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f5378 00:33:26.049 [2024-05-15 13:51:39.017465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:39.017503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:39.026943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e5658 00:33:26.049 [2024-05-15 13:51:39.028201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:39.028253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:39.038082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f9b30 00:33:26.049 [2024-05-15 13:51:39.039247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:39.039299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:39.052193] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190de038 00:33:26.049 [2024-05-15 13:51:39.054000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:39.054037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:39.060178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190de470 00:33:26.049 [2024-05-15 13:51:39.060921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:39.060959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:39.074306] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e3d08 00:33:26.049 [2024-05-15 13:51:39.075948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 
13:51:39.075998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:39.082482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e9168 00:33:26.049 [2024-05-15 13:51:39.083376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:39.083412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:26.049 [2024-05-15 13:51:39.096005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190de038 00:33:26.049 [2024-05-15 13:51:39.097561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.049 [2024-05-15 13:51:39.097636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:26.050 [2024-05-15 13:51:39.107162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e0630 00:33:26.050 [2024-05-15 13:51:39.108722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.050 [2024-05-15 13:51:39.108758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:26.050 [2024-05-15 13:51:39.117766] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f57b0 00:33:26.050 [2024-05-15 13:51:39.119150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.050 [2024-05-15 13:51:39.119201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:26.050 [2024-05-15 13:51:39.128516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fac10 00:33:26.050 [2024-05-15 13:51:39.129762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.050 [2024-05-15 13:51:39.129811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:26.050 [2024-05-15 13:51:39.140942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e49b0 00:33:26.050 [2024-05-15 13:51:39.142663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:26.050 [2024-05-15 13:51:39.142728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:26.307 [2024-05-15 13:51:39.151388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ea680 00:33:26.307 [2024-05-15 13:51:39.152282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:26.307 [2024-05-15 13:51:39.152320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:33:26.308 [2024-05-15 13:51:39.163396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190fbcf0
00:33:26.308 [2024-05-15 13:51:39.164760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:26.308 [2024-05-15 13:51:39.164795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:33:26.308 [2024-05-15 13:51:39.174206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190e5658
00:33:26.308 [2024-05-15 13:51:39.175419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:26.308 [2024-05-15 13:51:39.175456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:33:26.308 [2024-05-15 13:51:39.185825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190ea248
00:33:26.308 [2024-05-15 13:51:39.187023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:26.308 [2024-05-15 13:51:39.187072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:33:26.308 [2024-05-15 13:51:39.199403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190efae0
00:33:26.308 [2024-05-15 13:51:39.201222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:26.308 [2024-05-15 13:51:39.201271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:33:26.308 [2024-05-15 13:51:39.209434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e270) with pdu=0x2000190f8a50
00:33:26.308 [2024-05-15 13:51:39.210399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:26.308 [2024-05-15 13:51:39.210433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:26.308
00:33:26.308 Latency(us)
00:33:26.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:26.308 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:26.308 nvme0n1 : 2.00 21902.85 85.56 0.00 0.00 5833.36 2308.65 15847.80
00:33:26.308 ===================================================================================================================
00:33:26.308 Total : 21902.85 85.56 0.00 0.00 5833.36 2308.65 15847.80
00:33:26.308 0
00:33:26.308 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:26.308 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:26.308 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error --
host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:26.308 | .driver_specific 00:33:26.308 | .nvme_error 00:33:26.308 | .status_code 00:33:26.308 | .command_transient_transport_error' 00:33:26.308 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:26.579 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 172 > 0 )) 00:33:26.579 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 112984 00:33:26.579 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 112984 ']' 00:33:26.579 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 112984 00:33:26.579 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:26.579 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:26.579 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112984 00:33:26.579 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:26.580 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:26.580 killing process with pid 112984 00:33:26.580 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112984' 00:33:26.580 Received shutdown signal, test time was about 2.000000 seconds 00:33:26.580 00:33:26.580 Latency(us) 00:33:26.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.580 =================================================================================================================== 00:33:26.580 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:26.580 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 112984 00:33:26.580 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 112984 00:33:26.838 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:26.838 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:26.838 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:26.838 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:26.838 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:26.838 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=113069 00:33:26.838 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 113069 /var/tmp/bperf.sock 00:33:26.838 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 113069 ']' 00:33:26.838 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:26.838 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:26.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
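For reference, the transient-error check traced above reduces to a single RPC call piped through jq. A minimal sketch, assuming the same /var/tmp/bperf.sock socket and the nvme0n1 bdev created by bdevperf; the field path is copied from the jq filter in the trace, and the nvme_error block is populated because bdev_nvme_set_options --nvme-error-stat is enabled for these runs:
# Count completions that finished with COMMAND TRANSIENT TRANSPORT ERROR (00/22),
# as accumulated in the nvme bdev's error statistics.
errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
         bdev_get_iostat -b nvme0n1 \
       | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errs > 0 )) && echo "observed ${errs} transient transport errors"   # 172 in the pass above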
00:33:26.838 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:26.838 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:26.838 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:26.838 13:51:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:26.838 [2024-05-15 13:51:39.756480] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:33:26.838 [2024-05-15 13:51:39.756592] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113069 ] 00:33:26.838 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:26.838 Zero copy mechanism will not be used. 00:33:26.838 [2024-05-15 13:51:39.879471] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:26.838 [2024-05-15 13:51:39.898553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.096 [2024-05-15 13:51:39.991888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.663 13:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:27.663 13:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:33:27.663 13:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:27.663 13:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:27.922 13:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:27.922 13:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.922 13:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:27.922 13:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.922 13:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:27.922 13:51:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:28.180 nvme0n1 00:33:28.440 13:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:28.440 13:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.440 13:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:28.440 13:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.440 13:51:41 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:28.440 13:51:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:28.440 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:28.440 Zero copy mechanism will not be used. 00:33:28.440 Running I/O for 2 seconds... 00:33:28.440 [2024-05-15 13:51:41.403294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.440 [2024-05-15 13:51:41.403596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.440 [2024-05-15 13:51:41.403664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.440 [2024-05-15 13:51:41.408417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.440 [2024-05-15 13:51:41.408722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.440 [2024-05-15 13:51:41.408757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.440 [2024-05-15 13:51:41.413503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.440 [2024-05-15 13:51:41.413803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.440 [2024-05-15 13:51:41.413837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.440 [2024-05-15 13:51:41.418630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.440 [2024-05-15 13:51:41.418914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.440 [2024-05-15 13:51:41.418947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.440 [2024-05-15 13:51:41.423695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.440 [2024-05-15 13:51:41.423993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.440 [2024-05-15 13:51:41.424026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.440 [2024-05-15 13:51:41.428930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.440 [2024-05-15 13:51:41.429216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.440 [2024-05-15 13:51:41.429252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.440 [2024-05-15 
13:51:41.434023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.440 [2024-05-15 13:51:41.434308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.440 [2024-05-15 13:51:41.434342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.440 [2024-05-15 13:51:41.439139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.440 [2024-05-15 13:51:41.439427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.440 [2024-05-15 13:51:41.439460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.440 [2024-05-15 13:51:41.444247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.440 [2024-05-15 13:51:41.444558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.440 [2024-05-15 13:51:41.444591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.440 [2024-05-15 13:51:41.449355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.440 [2024-05-15 13:51:41.449665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.440 [2024-05-15 13:51:41.449700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.440 [2024-05-15 13:51:41.454436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.454733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.454765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.459491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.459788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.459820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.464548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.464854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.464887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.469663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.469949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.469992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.474678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.474964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.474996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.479754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.480038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.480071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.484845] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.485130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.485163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.489932] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.490227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.490261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.495026] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.495309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.495342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.500191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.500498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.500531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.505294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.505585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.505628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.510364] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.510674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.510707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.515474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.515773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.515805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.520564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.520862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.520896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.525638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.525922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.525965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.530708] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.530995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.531026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.441 [2024-05-15 13:51:41.535784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.441 [2024-05-15 13:51:41.536070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.441 [2024-05-15 13:51:41.536102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.540935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.541220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.541257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.546035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.546319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.546355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.551108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.551394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.551431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.556243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.556556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.556592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.561422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.561720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.561752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.566490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.566799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.566834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.571579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.571877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 
[2024-05-15 13:51:41.571912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.576654] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.576939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.576972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.581744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.582031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.582065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.586814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.587124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.587158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.591948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.592234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.592269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.597078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.597362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.597397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.602342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.602643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.602676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.607423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.607725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.607759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.612548] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.612845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.612879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.617619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.617903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.617938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.622719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.623004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.623037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.627802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.628097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.628132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.632889] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.633175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.633212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.637949] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.638235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.638272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.643045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.643337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.643372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.648142] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.648437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.648472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.653246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.653528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.653563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.658304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.658592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.658638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.702 [2024-05-15 13:51:41.663366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.702 [2024-05-15 13:51:41.663666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.702 [2024-05-15 13:51:41.663701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.668448] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.668750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.668786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.673619] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.673902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.673937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.678683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.678968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.679003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.683803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.684098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.684132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.688891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.689174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.689209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.693970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.694256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.694291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.699044] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.699325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.699360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.704132] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.704425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.704460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.709255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.709554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.709588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.714358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 
[2024-05-15 13:51:41.714658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.714693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.719425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.719737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.719769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.724578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.724874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.724909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.729632] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.729919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.729953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.734662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.734946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.734981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.739746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.740031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.740066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.744899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.745195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.745231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.750110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) 
with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.750426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.750459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.755301] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.755598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.755651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.760393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.760692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.760727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.765560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.765860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.765894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.770634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.770919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.770955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.775673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.775972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.776007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.780846] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.781134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.781173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.785972] tcp.c:2058:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.786314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.786356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.791167] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.791464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.791500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.703 [2024-05-15 13:51:41.796273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.703 [2024-05-15 13:51:41.796571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.703 [2024-05-15 13:51:41.796624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.963 [2024-05-15 13:51:41.801409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.963 [2024-05-15 13:51:41.801721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.963 [2024-05-15 13:51:41.801755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.963 [2024-05-15 13:51:41.806586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.963 [2024-05-15 13:51:41.806901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.963 [2024-05-15 13:51:41.806937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.963 [2024-05-15 13:51:41.811687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.963 [2024-05-15 13:51:41.811971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.963 [2024-05-15 13:51:41.812005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.963 [2024-05-15 13:51:41.816738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.963 [2024-05-15 13:51:41.817024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.963 [2024-05-15 13:51:41.817059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.963 [2024-05-15 13:51:41.821897] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.963 [2024-05-15 13:51:41.822177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.963 [2024-05-15 13:51:41.822210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.963 [2024-05-15 13:51:41.827007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.963 [2024-05-15 13:51:41.827291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.963 [2024-05-15 13:51:41.827319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.963 [2024-05-15 13:51:41.832088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.963 [2024-05-15 13:51:41.832384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.963 [2024-05-15 13:51:41.832416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.963 [2024-05-15 13:51:41.837293] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.963 [2024-05-15 13:51:41.837587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.963 [2024-05-15 13:51:41.837637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.963 [2024-05-15 13:51:41.842564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.963 [2024-05-15 13:51:41.842876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.842912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.847763] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.848048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.848083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.852965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.853262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.853298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
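Every WRITE in this stream completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) because the data-digest check is made to fail on purpose: the run attaches the controller with --ddgst and injects crc32c corruption through the accel error RPC, as traced at the start of this run. A minimal sketch of that setup, with socket paths and flags copied from the trace (whether rpc_cmd uses the default /var/tmp/spdk.sock here is an assumption; the trace does not expand it):
# Attach the controller with data digest enabled, through bdevperf's RPC socket.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Inject crc32c corruption exactly as traced above (flags copied verbatim; socket assumed).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    accel_error_inject_error -o crc32c -t corrupt -i 32
# With -o 131072 and 4096-byte blocks, each WRITE spans 131072 / 4096 = 32 blocks,
# which is the len:32 shown in every command print above.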
00:33:28.964 [2024-05-15 13:51:41.858197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.858511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.858547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.863346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.863647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.863679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.868539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.868849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.868884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.873706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.873991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.874026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.878843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.879134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.879169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.883966] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.884249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.884284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.889050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.889336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.889371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.894163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.894446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.894481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.899285] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.899570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.899616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.904358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.904656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.904689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.909450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.909752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.909783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.914559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.914862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.914895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.919683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.919969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.920005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.924819] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.925103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.925138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.929867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.930152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.930187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.934911] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.935197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.935231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.939969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.940252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.940287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.945097] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.945392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.945427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.950214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.950498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.950533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.955320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.955631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.955666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.960513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.960810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.960842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.965581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.965892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.965924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.970623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.970933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.970968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.975828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.976140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.976175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.980913] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.981198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.981234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.986018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.986302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.986338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.991056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.991355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:41.991390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:41.996234] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:41.996547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 
[2024-05-15 13:51:41.996583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:42.001334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:42.001643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:42.001678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:42.006532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:42.006829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:42.006867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:42.011762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:42.012082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:42.012117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:42.016993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:42.017290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:42.017323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:42.022150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:42.022447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:42.022480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:42.027210] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:42.027496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:42.027531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:42.032350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:42.032662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:42.032697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:42.037480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:42.037779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:42.037814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:42.042579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:42.042882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:42.042917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:42.047691] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:42.047974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:42.048009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:42.052835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:42.053136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:42.053171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:28.964 [2024-05-15 13:51:42.057959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:28.964 [2024-05-15 13:51:42.058255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:28.964 [2024-05-15 13:51:42.058289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.223 [2024-05-15 13:51:42.063059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.223 [2024-05-15 13:51:42.063341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.223 [2024-05-15 13:51:42.063377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.223 [2024-05-15 13:51:42.068155] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.223 [2024-05-15 13:51:42.068449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.223 [2024-05-15 13:51:42.068484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.223 [2024-05-15 13:51:42.073227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.223 [2024-05-15 13:51:42.073511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.223 [2024-05-15 13:51:42.073547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.223 [2024-05-15 13:51:42.078307] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.223 [2024-05-15 13:51:42.078590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.223 [2024-05-15 13:51:42.078641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.223 [2024-05-15 13:51:42.083392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.223 [2024-05-15 13:51:42.083695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.223 [2024-05-15 13:51:42.083730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.223 [2024-05-15 13:51:42.088471] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.088778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.088812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.093660] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.093944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.093979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.098885] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.099201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.099236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.104027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.104325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.104370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.109185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.109484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.109520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.114296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.114608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.114654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.119416] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.119728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.119761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.124557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.124854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.124889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.129668] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.129982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.130014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.134857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.135143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.135178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.140037] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 
[2024-05-15 13:51:42.140362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.140396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.145199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.145494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.145530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.150339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.150649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.150684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.155463] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.155777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.155810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.160541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.160848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.160883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.165778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.166064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.166099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.170815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.171101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.171136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.175903] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.176200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.176237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.180982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.181264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.181297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.186108] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.186392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.186429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.191257] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.191567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.191615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.196462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.196760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.196793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.201559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.201869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.201904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.206641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.206925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.206959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.211711] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.211994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.212031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.216905] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.217189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.217226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.221928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.222242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.224 [2024-05-15 13:51:42.222281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.224 [2024-05-15 13:51:42.227074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.224 [2024-05-15 13:51:42.227360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.227396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.232138] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.232445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.232478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.237355] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.237666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.237698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.242512] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.242814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.242848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:33:29.225 [2024-05-15 13:51:42.247616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.247897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.247932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.252678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.252961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.252995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.257756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.258041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.258076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.262875] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.263173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.263208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.268007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.268291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.268326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.273111] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.273410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.273456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.278261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.278544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.278579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.283360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.283661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.283695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.288486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.288810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.288842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.293637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.293924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.293956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.298803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.299085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.299119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.303872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.304154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.304189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.308991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.309276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.309308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.314056] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.314339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.314374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.225 [2024-05-15 13:51:42.319178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.225 [2024-05-15 13:51:42.319464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.225 [2024-05-15 13:51:42.319499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.485 [2024-05-15 13:51:42.324239] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.485 [2024-05-15 13:51:42.324531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.485 [2024-05-15 13:51:42.324566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.485 [2024-05-15 13:51:42.329409] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.485 [2024-05-15 13:51:42.329709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.485 [2024-05-15 13:51:42.329743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.485 [2024-05-15 13:51:42.334529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.485 [2024-05-15 13:51:42.334831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.485 [2024-05-15 13:51:42.334863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.485 [2024-05-15 13:51:42.339586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.485 [2024-05-15 13:51:42.339883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.485 [2024-05-15 13:51:42.339915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.485 [2024-05-15 13:51:42.344669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.485 [2024-05-15 13:51:42.344954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.485 [2024-05-15 13:51:42.344989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.485 [2024-05-15 13:51:42.349828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.485 [2024-05-15 13:51:42.350111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.485 [2024-05-15 13:51:42.350145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.485 [2024-05-15 13:51:42.354861] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.485 [2024-05-15 13:51:42.355144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.485 [2024-05-15 13:51:42.355177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.485 [2024-05-15 13:51:42.359927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.485 [2024-05-15 13:51:42.360209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.485 [2024-05-15 13:51:42.360245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.365003] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.365288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.365321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.370125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.370410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.370447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.375252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.375537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.375575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.380311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.380622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.380655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.385434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.385733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 
[2024-05-15 13:51:42.385768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.390449] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.390745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.390777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.395549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.395853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.395891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.400641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.400924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.400960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.405690] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.405976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.406009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.410751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.411036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.411071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.415847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.416133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.416165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.420945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.421228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.421262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.426034] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.426317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.426350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.431165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.431450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.431486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.436339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.436641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.436677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.441395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.441694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.441727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.446448] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.446748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.446785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.451520] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.451822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.451855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.456651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.456936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.456971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.461734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.462019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.462055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.466907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.467192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.467228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.471965] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.472250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.472286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.477096] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.477379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.477415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.482178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.482463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.482499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.487279] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.487564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.487611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.492383] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.492682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.492717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.497460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.486 [2024-05-15 13:51:42.497773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.486 [2024-05-15 13:51:42.497808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.486 [2024-05-15 13:51:42.502479] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.502779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.502814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.507564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.507861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.507895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.512623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.512905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.512941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.517682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.517965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.518001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.522728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.523038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.523072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.527839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 
[2024-05-15 13:51:42.528188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.528244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.533020] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.533290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.533330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.537884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.538153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.538191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.542810] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.543079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.543116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.547714] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.547982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.548020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.552637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.552906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.552940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.557515] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.557809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.557848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.562439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.562723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.562759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.567337] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.567619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.567663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.572243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.572519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.572556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.577190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.577460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.487 [2024-05-15 13:51:42.577498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.487 [2024-05-15 13:51:42.582141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.487 [2024-05-15 13:51:42.582410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.748 [2024-05-15 13:51:42.582448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.748 [2024-05-15 13:51:42.587068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.748 [2024-05-15 13:51:42.587338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.748 [2024-05-15 13:51:42.587376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.748 [2024-05-15 13:51:42.592033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.748 [2024-05-15 13:51:42.592301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.748 [2024-05-15 13:51:42.592347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.748 [2024-05-15 13:51:42.596984] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.748 [2024-05-15 13:51:42.597250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.748 [2024-05-15 13:51:42.597286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.748 [2024-05-15 13:51:42.601878] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.748 [2024-05-15 13:51:42.602145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.748 [2024-05-15 13:51:42.602182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.748 [2024-05-15 13:51:42.606732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.748 [2024-05-15 13:51:42.607000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.748 [2024-05-15 13:51:42.607036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.748 [2024-05-15 13:51:42.611670] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.748 [2024-05-15 13:51:42.611941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.748 [2024-05-15 13:51:42.611977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.748 [2024-05-15 13:51:42.616595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.748 [2024-05-15 13:51:42.616877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.748 [2024-05-15 13:51:42.616912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.748 [2024-05-15 13:51:42.621474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.748 [2024-05-15 13:51:42.621758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.748 [2024-05-15 13:51:42.621793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.748 [2024-05-15 13:51:42.626341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.748 [2024-05-15 13:51:42.626621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.748 [2024-05-15 13:51:42.626655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
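
The repeated tcp.c:data_crc32_calc_done errors above appear to come from the digest-checking path of this test: each received DATA PDU fails its CRC32C data digest verification and the matching WRITE is completed with a transient transport error. As a rough, hedged illustration only (this is not SPDK source; the payload size and variable names are assumed), the check amounts to recomputing CRC32C over the PDU payload and comparing it with the DDGST value carried on the wire:

/*
 * Sketch (not SPDK code): the NVMe/TCP data digest is a CRC32C (Castagnoli)
 * over the DATA PDU payload. The receiver recomputes it and compares it with
 * the DDGST field; a mismatch is what the log above reports as a data digest
 * error on the TCP qpair.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC32C: reflected polynomial 0x82F63B78, init/xorout 0xFFFFFFFF. */
static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* 32 blocks of 512 B to mirror the len:32 WRITEs in the log (block size assumed). */
    static uint8_t payload[32 * 512];
    uint32_t ddgst_expected = crc32c(payload, sizeof(payload));
    uint32_t ddgst_received = ddgst_expected ^ 0x1u; /* simulate a corrupted digest */

    if (ddgst_received != ddgst_expected)
        printf("data digest mismatch: got 0x%08x, want 0x%08x\n",
               ddgst_received, ddgst_expected);
    return 0;
}
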
00:33:29.748 [2024-05-15 13:51:42.631259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.748 [2024-05-15 13:51:42.631527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.748 [2024-05-15 13:51:42.631564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.748 [2024-05-15 13:51:42.636140] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.748 [2024-05-15 13:51:42.636420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.748 [2024-05-15 13:51:42.636456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.748 [2024-05-15 13:51:42.641050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.748 [2024-05-15 13:51:42.641315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.748 [2024-05-15 13:51:42.641351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.748 [2024-05-15 13:51:42.645946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.748 [2024-05-15 13:51:42.646211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.748 [2024-05-15 13:51:42.646250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.748 [2024-05-15 13:51:42.650837] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.748 [2024-05-15 13:51:42.651106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.651141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.655689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.655959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.655997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.660620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.660886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.660921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.665455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.665737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.665774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.670322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.670595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.670643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.675186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.675453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.675489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.680095] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.680371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.680410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.684956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.685224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.685265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.689854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.690120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.690155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.694701] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.694969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.695007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.699569] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.699851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.699888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.704444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.704729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.704768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.709350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.709628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.709663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.714255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.714521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.714558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.719130] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.719398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.719434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.724035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.724301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.724347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.728930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.729198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.729233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.733839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.734109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.734147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.738764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.739032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.739068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.743620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.743888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.743925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.748532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.748810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.748846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.753420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.753699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.753736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.758322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.758588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.758635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.763202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.763469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 
[2024-05-15 13:51:42.763507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.768099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.768375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.768410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.772921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.773184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.773217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.777776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.749 [2024-05-15 13:51:42.778040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.749 [2024-05-15 13:51:42.778073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.749 [2024-05-15 13:51:42.782636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.750 [2024-05-15 13:51:42.782902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.750 [2024-05-15 13:51:42.782934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.750 [2024-05-15 13:51:42.787519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.750 [2024-05-15 13:51:42.787801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.750 [2024-05-15 13:51:42.787833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.750 [2024-05-15 13:51:42.792393] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.750 [2024-05-15 13:51:42.792679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.750 [2024-05-15 13:51:42.792711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.750 [2024-05-15 13:51:42.797216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.750 [2024-05-15 13:51:42.797486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.750 [2024-05-15 13:51:42.797523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.750 [2024-05-15 13:51:42.802099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.750 [2024-05-15 13:51:42.802366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.750 [2024-05-15 13:51:42.802403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.750 [2024-05-15 13:51:42.807001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.750 [2024-05-15 13:51:42.807269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.750 [2024-05-15 13:51:42.807304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.750 [2024-05-15 13:51:42.811934] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.750 [2024-05-15 13:51:42.812203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.750 [2024-05-15 13:51:42.812238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.750 [2024-05-15 13:51:42.816835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.750 [2024-05-15 13:51:42.817099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.750 [2024-05-15 13:51:42.817132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.750 [2024-05-15 13:51:42.821734] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.750 [2024-05-15 13:51:42.822012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.750 [2024-05-15 13:51:42.822050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:29.750 [2024-05-15 13:51:42.826624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.750 [2024-05-15 13:51:42.826906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.750 [2024-05-15 13:51:42.826940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:29.750 [2024-05-15 13:51:42.831562] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.750 [2024-05-15 13:51:42.831843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.750 [2024-05-15 13:51:42.831886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:29.750 [2024-05-15 13:51:42.836469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.750 [2024-05-15 13:51:42.836759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.750 [2024-05-15 13:51:42.836791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:29.750 [2024-05-15 13:51:42.841373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:29.750 [2024-05-15 13:51:42.841654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:29.750 [2024-05-15 13:51:42.841692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.010 [2024-05-15 13:51:42.846274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.010 [2024-05-15 13:51:42.846542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.010 [2024-05-15 13:51:42.846576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.010 [2024-05-15 13:51:42.851205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.010 [2024-05-15 13:51:42.851474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.010 [2024-05-15 13:51:42.851508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.010 [2024-05-15 13:51:42.856089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.010 [2024-05-15 13:51:42.856368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.010 [2024-05-15 13:51:42.856401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.010 [2024-05-15 13:51:42.861073] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.010 [2024-05-15 13:51:42.861337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.010 [2024-05-15 13:51:42.861371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.010 [2024-05-15 13:51:42.865999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.010 [2024-05-15 13:51:42.866265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.010 [2024-05-15 13:51:42.866299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.010 [2024-05-15 13:51:42.870900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.010 [2024-05-15 13:51:42.871166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.010 [2024-05-15 13:51:42.871198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.010 [2024-05-15 13:51:42.875761] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.010 [2024-05-15 13:51:42.876028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.010 [2024-05-15 13:51:42.876060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.010 [2024-05-15 13:51:42.880692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.010 [2024-05-15 13:51:42.880959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.010 [2024-05-15 13:51:42.880992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.010 [2024-05-15 13:51:42.885560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.010 [2024-05-15 13:51:42.885849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.010 [2024-05-15 13:51:42.885882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.010 [2024-05-15 13:51:42.890457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.010 [2024-05-15 13:51:42.890741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.010 [2024-05-15 13:51:42.890773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.010 [2024-05-15 13:51:42.895343] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.895622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.895653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.900265] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 
[2024-05-15 13:51:42.900546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.900579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.905177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.905446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.905480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.910048] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.910332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.910376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.915013] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.915307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.915340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.919956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.920223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.920256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.924873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.925154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.925187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.929754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.930021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.930054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.934652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.934942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.934975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.939606] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.939924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.939958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.944477] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.944761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.944794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.949414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.949727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.949762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.954388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.954699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.954731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.959456] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.959745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.959778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.964455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.964747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.964780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.969405] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.969727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.969757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.974388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.974692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.974722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.979313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.979610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.979655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.984395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.984682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.984715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.989380] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.989689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.989721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.994315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.994622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.994666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:42.999330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:42.999595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:42.999642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
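
For readers decoding the completion lines, the "(00/22)" printed by spdk_nvme_print_completion is the Status Code Type / Status Code pair, and p, m, and dnr are the phase, more, and do-not-retry bits of the completion status word. The small, hedged sketch below (field positions taken from the NVMe base specification, not from SPDK's own structures) shows how those fields unpack from the upper 16 bits of completion Dword 3:

/*
 * Illustration only: decode the status word printed as
 * "(SCT/SC) ... p:.. m:.. dnr:.." in the log above.
 */
#include <stdint.h>
#include <stdio.h>

struct status_fields {
    uint8_t p;    /* phase tag        (bit 0)     */
    uint8_t sc;   /* status code      (bits 8:1)  */
    uint8_t sct;  /* status code type (bits 11:9) */
    uint8_t m;    /* more             (bit 14)    */
    uint8_t dnr;  /* do not retry     (bit 15)    */
};

static struct status_fields decode_status(uint16_t status)
{
    struct status_fields f = {
        .p   = status & 0x1,
        .sc  = (status >> 1) & 0xFF,
        .sct = (status >> 9) & 0x7,
        .m   = (status >> 14) & 0x1,
        .dnr = (status >> 15) & 0x1,
    };
    return f;
}

int main(void)
{
    /* SCT 0x0 / SC 0x22 is the TRANSIENT TRANSPORT ERROR seen throughout this log. */
    uint16_t status = (uint16_t)((0x22 << 1) | (0x0 << 9));
    struct status_fields f = decode_status(status);

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", f.sct, f.sc, f.p, f.m, f.dnr);
    return 0;
}
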
00:33:30.011 [2024-05-15 13:51:43.004277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:43.004570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:43.004618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:43.009228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:43.009517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:43.009550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:43.014233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:43.014512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:43.014545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:43.019194] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:43.019490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:43.019523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:43.024261] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:43.024571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.011 [2024-05-15 13:51:43.024617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.011 [2024-05-15 13:51:43.029254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.011 [2024-05-15 13:51:43.029552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.029584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.034242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.034508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.034540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.039149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.039443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.039477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.044154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.044455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.044489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.049182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.049486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.049519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.054179] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.054463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.054497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.059187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.059481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.059515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.064178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.064524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.064557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.069395] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.069717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.069747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.074428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.074746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.074783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.079426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.079705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.079737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.084460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.084745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.084778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.089447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.089740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.089775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.094487] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.094796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.094832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.099787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.100053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.100079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.012 [2024-05-15 13:51:43.104768] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.012 [2024-05-15 13:51:43.105037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.012 [2024-05-15 13:51:43.105073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.109839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.110107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.110141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.114825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.115092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.115128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.119721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.120080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.120143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.124676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.124998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.125054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.129305] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.129553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.129586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.133735] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.133988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.134018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.138057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.138297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 
[2024-05-15 13:51:43.138338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.142496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.142750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.142780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.146956] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.147180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.147216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.151445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.151686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.151717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.155926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.156152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.156186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.160486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.160733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.160763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.165124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.165395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.165424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.169820] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.170089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.170134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.174466] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.174735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.174787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.178973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.179198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.179247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.183483] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.183761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.183790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.187914] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.188173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.188233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.273 [2024-05-15 13:51:43.192323] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.273 [2024-05-15 13:51:43.192576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.273 [2024-05-15 13:51:43.192616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.196777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.197031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.197071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.201254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.201519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.201564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.205836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.206113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.206142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.210346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.210586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.210608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.214863] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.215119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.215158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.219330] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.219587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.219642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.223784] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.224037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.224096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.228159] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.228417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.228450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.232813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.233083] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.233108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.237447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.237700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.237724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.242104] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.242327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.242361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.246851] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.247075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.247097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.251429] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.251699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.251767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.256221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.256492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.256526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.260975] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.261237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.261277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.265737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 
[2024-05-15 13:51:43.266006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.266034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.270406] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.270630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.270653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.275057] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.275304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.275350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.279618] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.279904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.279944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.284372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.284600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.284644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.288898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.289137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.289162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.293474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.293759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.293797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.298033] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.298275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.298305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.302676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.302897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.302920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.307199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.307412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.307434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.311739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.311952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.311974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.316178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.316402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.274 [2024-05-15 13:51:43.316426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.274 [2024-05-15 13:51:43.320745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.274 [2024-05-15 13:51:43.320955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.275 [2024-05-15 13:51:43.320978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.275 [2024-05-15 13:51:43.325308] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.275 [2024-05-15 13:51:43.325520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.275 [2024-05-15 13:51:43.325551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.275 [2024-05-15 13:51:43.329759] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.275 [2024-05-15 13:51:43.330081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.275 [2024-05-15 13:51:43.330134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.275 [2024-05-15 13:51:43.334252] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.275 [2024-05-15 13:51:43.334460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.275 [2024-05-15 13:51:43.334504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.275 [2024-05-15 13:51:43.338790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.275 [2024-05-15 13:51:43.338955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.275 [2024-05-15 13:51:43.338993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.275 [2024-05-15 13:51:43.343372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.275 [2024-05-15 13:51:43.343551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.275 [2024-05-15 13:51:43.343587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.275 [2024-05-15 13:51:43.347957] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.275 [2024-05-15 13:51:43.348175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.275 [2024-05-15 13:51:43.348201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.275 [2024-05-15 13:51:43.352555] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.275 [2024-05-15 13:51:43.352764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.275 [2024-05-15 13:51:43.352801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.275 [2024-05-15 13:51:43.357118] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.275 [2024-05-15 13:51:43.357331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.275 [2024-05-15 13:51:43.357362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:33:30.275 [2024-05-15 13:51:43.362035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.275 [2024-05-15 13:51:43.362200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.275 [2024-05-15 13:51:43.362240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.275 [2024-05-15 13:51:43.366521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.275 [2024-05-15 13:51:43.366709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.275 [2024-05-15 13:51:43.366743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.546 [2024-05-15 13:51:43.371165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.546 [2024-05-15 13:51:43.371381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.546 [2024-05-15 13:51:43.371406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.546 [2024-05-15 13:51:43.375925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.546 [2024-05-15 13:51:43.376117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.546 [2024-05-15 13:51:43.376147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.547 [2024-05-15 13:51:43.380635] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.547 [2024-05-15 13:51:43.380827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.547 [2024-05-15 13:51:43.380873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:30.547 [2024-05-15 13:51:43.385243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.547 [2024-05-15 13:51:43.385453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.547 [2024-05-15 13:51:43.385489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:30.547 [2024-05-15 13:51:43.389976] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.547 [2024-05-15 13:51:43.390188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.547 [2024-05-15 13:51:43.390222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:30.547 [2024-05-15 13:51:43.394595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x160e410) with pdu=0x2000190fef90 00:33:30.547 [2024-05-15 13:51:43.394813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:30.547 [2024-05-15 13:51:43.394845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:30.547 00:33:30.547 Latency(us) 00:33:30.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.547 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:30.547 nvme0n1 : 2.00 6216.50 777.06 0.00 0.00 2567.47 1861.82 6523.81 00:33:30.547 =================================================================================================================== 00:33:30.547 Total : 6216.50 777.06 0.00 0.00 2567.47 1861.82 6523.81 00:33:30.547 0 00:33:30.547 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:30.547 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:30.547 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:30.547 | .driver_specific 00:33:30.547 | .nvme_error 00:33:30.547 | .status_code 00:33:30.547 | .command_transient_transport_error' 00:33:30.547 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 401 > 0 )) 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 113069 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 113069 ']' 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 113069 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 113069 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:30.810 killing process with pid 113069 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 113069' 00:33:30.810 Received shutdown signal, test time was about 2.000000 seconds 00:33:30.810 00:33:30.810 Latency(us) 00:33:30.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.810 =================================================================================================================== 00:33:30.810 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 113069 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 113069 
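The (( 401 > 0 )) check above is the pass criterion for this case: the data digest errors injected on the TCP connection are expected to come back as COMMAND TRANSIENT TRANSPORT ERROR completions, and their count is read out of the bperf bdev statistics. A minimal sketch of that query, reusing only the RPC call and jq filter visible in the trace above (socket path and bdev name are the ones this run uses):

    # Count the transient transport errors recorded for nvme0n1 by the bperf app.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The test only requires that at least one such completion was observed.
    (( errcount > 0 ))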
00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 112764 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 112764 ']' 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 112764 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:30.810 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 112764 00:33:31.069 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:31.069 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:31.069 killing process with pid 112764 00:33:31.069 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 112764' 00:33:31.069 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 112764 00:33:31.069 [2024-05-15 13:51:43.926086] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:31.069 13:51:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 112764 00:33:31.069 00:33:31.069 real 0m18.641s 00:33:31.069 user 0m35.659s 00:33:31.069 sys 0m4.783s 00:33:31.069 13:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:31.069 13:51:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:31.069 ************************************ 00:33:31.069 END TEST nvmf_digest_error 00:33:31.069 ************************************ 00:33:31.328 13:51:44 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:31.328 13:51:44 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:31.328 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:31.328 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:31.328 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:31.328 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:31.328 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:31.328 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:31.328 rmmod nvme_tcp 00:33:31.328 rmmod nvme_fabrics 00:33:31.328 rmmod nvme_keyring 00:33:31.328 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:31.328 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:31.328 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:31.328 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 112764 ']' 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 112764 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 112764 ']' 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 112764 00:33:31.329 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (112764) - No such process 
00:33:31.329 Process with pid 112764 is not found 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 112764 is not found' 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:31.329 00:33:31.329 real 0m37.759s 00:33:31.329 user 1m11.702s 00:33:31.329 sys 0m9.788s 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:31.329 ************************************ 00:33:31.329 END TEST nvmf_digest 00:33:31.329 13:51:44 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:31.329 ************************************ 00:33:31.329 13:51:44 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]] 00:33:31.329 13:51:44 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]] 00:33:31.329 13:51:44 nvmf_tcp -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:33:31.329 13:51:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:31.329 13:51:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:31.329 13:51:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:31.329 ************************************ 00:33:31.329 START TEST nvmf_mdns_discovery 00:33:31.329 ************************************ 00:33:31.329 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:33:31.588 * Looking for test storage... 
00:33:31.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:33:31.588 
13:51:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.588 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:33:31.589 Cannot find device "nvmf_tgt_br" 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:33:31.589 Cannot find device "nvmf_tgt_br2" 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:33:31.589 Cannot find device "nvmf_tgt_br" 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:33:31.589 Cannot find device "nvmf_tgt_br2" 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:31.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:31.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:31.589 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:33:31.848 13:51:44 
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:33:31.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:31.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:33:31.848 00:33:31.848 --- 10.0.0.2 ping statistics --- 00:33:31.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.848 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:33:31.848 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:31.848 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:33:31.848 00:33:31.848 --- 10.0.0.3 ping statistics --- 00:33:31.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.848 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:31.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:31.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:33:31.848 00:33:31.848 --- 10.0.0.1 ping statistics --- 00:33:31.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.848 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=113360 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 113360 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 113360 ']' 
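For reference, the nvmf_veth_init sequence traced above builds a small veth-plus-bridge topology so the target can listen inside its own network namespace while the initiator stays in the host namespace. A condensed sketch of that setup, using the interface names and addresses shown here (the second target interface on 10.0.0.3 and the cleanup path are left out):

    # Condensed from the nvmf_veth_init trace above; error handling and 10.0.0.3 omitted.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # host to target, matching the ping output above

The 4420 rule matches the default NVMe/TCP data port; the discovery listener added later in this test uses port 8009 on the same 10.0.0.2 address.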
00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:31.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:31.848 13:51:44 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:31.848 [2024-05-15 13:51:44.908154] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:33:31.848 [2024-05-15 13:51:44.908287] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:32.107 [2024-05-15 13:51:45.034774] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:32.107 [2024-05-15 13:51:45.051567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.107 [2024-05-15 13:51:45.143992] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:32.107 [2024-05-15 13:51:45.144057] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:32.107 [2024-05-15 13:51:45.144069] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:32.107 [2024-05-15 13:51:45.144077] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:32.107 [2024-05-15 13:51:45.144084] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
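The nvmf/common.sh trace above rebuilds the test network before the target starts: namespace nvmf_tgt_ns_spdk holds the target-side veth endpoints (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3), their host-side peers are enslaved to the nvmf_br bridge together with nvmf_init_if's peer, nvmf_init_if carries 10.0.0.1, and an iptables rule admits TCP/4420 on nvmf_init_if. A trimmed, standalone sketch of that topology (first target interface only; names and addresses are taken from the trace, this is not the exact nvmf/common.sh code):

    # Minimal reconstruction of the traced topology; assumes root plus iproute2/iptables.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # host-facing pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-facing pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link set nvmf_init_br master nvmf_br   # bridge the host-side peers together
    ip link set nvmf_tgt_br  master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # host -> namespace reachability, as checked in the trace

The target application is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt ... --wait-for-rpc), which is why the notices that follow come from a reactor running behind the veth/bridge path.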
00:33:32.107 [2024-05-15 13:51:45.144109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:33.043 13:51:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:33.043 13:51:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:33:33.043 13:51:45 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:33.043 13:51:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:33.043 13:51:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.043 13:51:45 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:33.043 13:51:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:33:33.043 13:51:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.043 13:51:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.043 13:51:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.043 13:51:45 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:33:33.043 13:51:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.043 13:51:45 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.043 [2024-05-15 13:51:46.059236] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.043 [2024-05-15 13:51:46.067132] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:33.043 [2024-05-15 13:51:46.067362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.043 null0 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd 
bdev_null_create null1 1000 512 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.043 null1 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.043 null2 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.043 null3 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=113417 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 113417 /tmp/host.sock 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 113417 ']' 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:33.043 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:33.043 13:51:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:33.302 [2024-05-15 13:51:46.163358] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:33:33.302 [2024-05-15 13:51:46.163434] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113417 ] 00:33:33.302 [2024-05-15 13:51:46.282974] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
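Condensed, the RPC sequence traced above configures the in-namespace target on /var/tmp/spdk.sock before its framework is released (it was started with --wait-for-rpc), and only then is a second nvmf_tgt launched on /tmp/host.sock to act as the discovery host. The same target-side configuration could be issued by hand with SPDK's scripts/rpc.py; the rpc_cmd helper in the trace is assumed to forward its arguments to that script, and the rpc.py path is assumed from the repo checkout shown above:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed location under the checked-out repo
    SOCK=/var/tmp/spdk.sock                           # default RPC socket of the in-namespace nvmf_tgt

    $RPC -s $SOCK nvmf_set_config --discovery-filter=address
    $RPC -s $SOCK framework_start_init
    $RPC -s $SOCK nvmf_create_transport -t tcp -o -u 8192
    $RPC -s $SOCK nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    for b in null0 null1 null2 null3; do
        $RPC -s $SOCK bdev_null_create "$b" 1000 512   # arguments exactly as traced above
    done
    $RPC -s $SOCK bdev_wait_for_examine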
00:33:33.302 [2024-05-15 13:51:46.299090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:33.302 [2024-05-15 13:51:46.400004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:34.237 13:51:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:34.237 13:51:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:33:34.237 13:51:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:33:34.237 13:51:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:33:34.237 13:51:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:33:34.237 13:51:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=113446 00:33:34.237 13:51:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:33:34.237 13:51:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:33:34.237 13:51:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:33:34.237 Process 1002 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:33:34.237 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:33:34.237 Successfully dropped root privileges. 00:33:34.237 avahi-daemon 0.8 starting up. 00:33:34.237 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:33:34.237 Successfully called chroot(). 00:33:34.237 Successfully dropped remaining capabilities. 00:33:34.237 No service file found in /etc/avahi/services. 00:33:35.174 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:33:35.174 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:33:35.174 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:33:35.174 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:33:35.174 Network interface enumeration completed. 00:33:35.174 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:33:35.174 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:33:35.174 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:33:35.174 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:33:35.174 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 2642375298. 
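The avahi-daemon started above inside the namespace reads its configuration from a process substitution (/dev/fd/63); the echoed content amounts to the following, written out here as a plain file for readability (the /tmp path is only illustrative):

    cat > /tmp/avahi-nvmf.conf <<'EOF'
    [server]
    allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
    use-ipv4=yes
    use-ipv6=no
    EOF
    # Run it the same way the test does, inside the target namespace:
    ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-nvmf.conf

Restricting allow-interfaces to the two in-namespace veths keeps the mDNS traffic on 10.0.0.2/10.0.0.3, which matches the "Joining mDNS multicast group" lines in the avahi startup output above.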
00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:33:35.174 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:33:35.432 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:35.700 [2024-05-15 13:51:48.559853] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.700 [2024-05-15 13:51:48.608035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.700 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.701 [2024-05-15 13:51:48.648052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.701 13:51:48 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.701 [2024-05-15 13:51:48.656011] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:35.701 13:51:48 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:33:36.659 [2024-05-15 13:51:49.459870] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:33:37.226 [2024-05-15 13:51:50.059902] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:33:37.226 [2024-05-15 13:51:50.059955] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:33:37.226 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:33:37.226 cookie is 0 00:33:37.226 is_local: 1 00:33:37.226 our_own: 0 00:33:37.226 wide_area: 0 00:33:37.226 multicast: 1 00:33:37.226 cached: 1 00:33:37.226 [2024-05-15 13:51:50.159908] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:33:37.226 [2024-05-15 13:51:50.159976] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:33:37.226 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:33:37.226 cookie is 0 00:33:37.226 is_local: 1 00:33:37.226 our_own: 0 00:33:37.226 wide_area: 0 00:33:37.226 multicast: 1 00:33:37.226 cached: 1 00:33:37.226 [2024-05-15 13:51:50.160007] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:33:37.226 [2024-05-15 13:51:50.259880] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:33:37.226 [2024-05-15 13:51:50.259946] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:33:37.226 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:33:37.226 cookie is 0 00:33:37.226 is_local: 1 00:33:37.226 our_own: 0 00:33:37.226 wide_area: 0 00:33:37.226 multicast: 1 00:33:37.226 cached: 1 00:33:37.484 [2024-05-15 13:51:50.359888] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:33:37.484 [2024-05-15 13:51:50.359934] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:33:37.484 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:33:37.484 cookie is 0 00:33:37.484 is_local: 1 00:33:37.484 our_own: 0 00:33:37.484 wide_area: 0 00:33:37.484 multicast: 1 00:33:37.484 cached: 1 00:33:37.484 [2024-05-15 13:51:50.359964] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:33:38.050 [2024-05-15 13:51:51.063876] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:33:38.050 [2024-05-15 13:51:51.063928] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:33:38.050 [2024-05-15 13:51:51.063947] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:33:38.308 [2024-05-15 13:51:51.150031] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:33:38.308 [2024-05-15 13:51:51.206261] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:33:38.308 [2024-05-15 13:51:51.206293] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:33:38.308 [2024-05-15 13:51:51.263876] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:38.308 [2024-05-15 13:51:51.263918] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:38.308 [2024-05-15 13:51:51.263968] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:38.308 [2024-05-15 13:51:51.350039] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:33:38.308 [2024-05-15 13:51:51.405535] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:33:38.308 [2024-05-15 13:51:51.405588] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 
00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:33:40.836 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:33:40.837 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.095 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:33:41.095 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:33:41.095 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:41.095 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:33:41.095 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:33:41.095 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.095 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:33:41.095 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.095 13:51:53 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:41.096 13:51:54 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:33:42.059 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.366 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:33:42.366 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:33:42.366 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:33:42.366 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:42.366 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.366 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.366 [2024-05-15 13:51:55.199035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:42.366 [2024-05-15 13:51:55.199997] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:33:42.366 [2024-05-15 13:51:55.200038] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:33:42.366 [2024-05-15 13:51:55.200078] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:42.366 [2024-05-15 13:51:55.200092] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:42.366 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.366 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:33:42.366 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.366 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:42.366 [2024-05-15 13:51:55.206987] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:33:42.366 [2024-05-15 13:51:55.207997] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:33:42.366 [2024-05-15 13:51:55.208059] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:42.366 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.366 13:51:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:33:42.367 [2024-05-15 13:51:55.339128] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:33:42.367 [2024-05-15 13:51:55.339406] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:33:42.367 [2024-05-15 13:51:55.396614] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:33:42.367 [2024-05-15 13:51:55.396656] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:42.367 [2024-05-15 13:51:55.396663] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:42.367 [2024-05-15 13:51:55.396683] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:42.367 [2024-05-15 
13:51:55.397499] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:33:42.367 [2024-05-15 13:51:55.397535] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:33:42.367 [2024-05-15 13:51:55.397558] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:33:42.367 [2024-05-15 13:51:55.397575] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:33:42.367 [2024-05-15 13:51:55.442228] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:42.367 [2024-05-15 13:51:55.442257] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:42.367 [2024-05-15 13:51:55.443219] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:33:42.367 [2024-05-15 13:51:55.443236] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:33:43.299 
13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:33:43.299 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.559 [2024-05-15 13:51:56.523913] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:33:43.559 [2024-05-15 13:51:56.524109] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:33:43.559 [2024-05-15 13:51:56.524306] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:43.559 [2024-05-15 13:51:56.524332] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:43.559 [2024-05-15 13:51:56.525106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.559 [2024-05-15 13:51:56.525143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.559 [2024-05-15 13:51:56.525158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.559 [2024-05-15 13:51:56.525169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.559 [2024-05-15 13:51:56.525179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.559 [2024-05-15 13:51:56.525188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.559 [2024-05-15 13:51:56.525199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.559 [2024-05-15 13:51:56.525208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.559 [2024-05-15 13:51:56.525218] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6870 is same with the state(5) to be set 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:43.559 [2024-05-15 13:51:56.531914] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: 
Discovery[10.0.0.3:8009] got aer 00:33:43.559 [2024-05-15 13:51:56.532107] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:43.559 [2024-05-15 13:51:56.535066] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6870 (9): Bad file descriptor 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:43.559 13:51:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:33:43.559 [2024-05-15 13:51:56.537941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.559 [2024-05-15 13:51:56.538115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.559 [2024-05-15 13:51:56.538278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.559 [2024-05-15 13:51:56.538399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.559 [2024-05-15 13:51:56.538468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.559 [2024-05-15 13:51:56.538593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.559 [2024-05-15 13:51:56.538770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:43.559 [2024-05-15 13:51:56.538897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:43.559 [2024-05-15 13:51:56.539028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d52b0 is same with the state(5) to be set 00:33:43.559 [2024-05-15 13:51:56.545088] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:43.559 [2024-05-15 13:51:56.545371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.559 [2024-05-15 13:51:56.545547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.559 [2024-05-15 13:51:56.545573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6870 with addr=10.0.0.2, port=4420 00:33:43.559 [2024-05-15 13:51:56.545586] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6870 is same with the state(5) to be set 00:33:43.559 [2024-05-15 13:51:56.545655] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6870 (9): Bad file descriptor 00:33:43.559 [2024-05-15 13:51:56.545676] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:43.559 [2024-05-15 13:51:56.545687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:43.559 [2024-05-15 13:51:56.545698] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:43.559 [2024-05-15 13:51:56.545716] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
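The connect() failures with errno = 111 here, and in the retries that follow, are the expected consequence of the two nvmf_subsystem_remove_listener calls just above: the 10.0.0.2:4420 and 10.0.0.3:4420 listeners are gone, so bdev_nvme's reconnect attempts to port 4420 are refused, while the 4421 listeners added earlier are left untouched. errno 111 from posix_sock_create's connect() is ECONNREFUSED, e.g.:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused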
00:33:43.559 [2024-05-15 13:51:56.547900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d52b0 (9): Bad file descriptor 00:33:43.559 [2024-05-15 13:51:56.555299] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:43.559 [2024-05-15 13:51:56.555571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.559 [2024-05-15 13:51:56.555764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.559 [2024-05-15 13:51:56.555936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6870 with addr=10.0.0.2, port=4420 00:33:43.559 [2024-05-15 13:51:56.556087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6870 is same with the state(5) to be set 00:33:43.559 [2024-05-15 13:51:56.556302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6870 (9): Bad file descriptor 00:33:43.559 [2024-05-15 13:51:56.556495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:43.559 [2024-05-15 13:51:56.556647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:43.559 [2024-05-15 13:51:56.556667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:43.559 [2024-05-15 13:51:56.556722] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:43.559 [2024-05-15 13:51:56.557909] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:43.559 [2024-05-15 13:51:56.558000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.559 [2024-05-15 13:51:56.558050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.559 [2024-05-15 13:51:56.558067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d52b0 with addr=10.0.0.3, port=4420 00:33:43.559 [2024-05-15 13:51:56.558077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d52b0 is same with the state(5) to be set 00:33:43.559 [2024-05-15 13:51:56.558094] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d52b0 (9): Bad file descriptor 00:33:43.559 [2024-05-15 13:51:56.558109] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:43.559 [2024-05-15 13:51:56.558118] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:43.559 [2024-05-15 13:51:56.558134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:43.559 [2024-05-15 13:51:56.558149] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:43.559 [2024-05-15 13:51:56.565506] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:43.559 [2024-05-15 13:51:56.565774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.559 [2024-05-15 13:51:56.566005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.559 [2024-05-15 13:51:56.566167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6870 with addr=10.0.0.2, port=4420 00:33:43.559 [2024-05-15 13:51:56.566314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6870 is same with the state(5) to be set 00:33:43.559 [2024-05-15 13:51:56.566470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6870 (9): Bad file descriptor 00:33:43.559 [2024-05-15 13:51:56.566678] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:43.559 [2024-05-15 13:51:56.566884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:43.560 [2024-05-15 13:51:56.567028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:43.560 [2024-05-15 13:51:56.567140] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:43.560 [2024-05-15 13:51:56.567961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:43.560 [2024-05-15 13:51:56.568198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.568425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.568576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d52b0 with addr=10.0.0.3, port=4420 00:33:43.560 [2024-05-15 13:51:56.568753] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d52b0 is same with the state(5) to be set 00:33:43.560 [2024-05-15 13:51:56.568939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d52b0 (9): Bad file descriptor 00:33:43.560 [2024-05-15 13:51:56.569151] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:43.560 [2024-05-15 13:51:56.569167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:43.560 [2024-05-15 13:51:56.569177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:43.560 [2024-05-15 13:51:56.569215] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:43.560 [2024-05-15 13:51:56.575731] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:43.560 [2024-05-15 13:51:56.575976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.576232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.576384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6870 with addr=10.0.0.2, port=4420 00:33:43.560 [2024-05-15 13:51:56.576533] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6870 is same with the state(5) to be set 00:33:43.560 [2024-05-15 13:51:56.576698] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6870 (9): Bad file descriptor 00:33:43.560 [2024-05-15 13:51:56.577017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:43.560 [2024-05-15 13:51:56.577163] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:43.560 [2024-05-15 13:51:56.577302] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:43.560 [2024-05-15 13:51:56.577439] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:43.560 [2024-05-15 13:51:56.578157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:43.560 [2024-05-15 13:51:56.578392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.578685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.578711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d52b0 with addr=10.0.0.3, port=4420 00:33:43.560 [2024-05-15 13:51:56.578723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d52b0 is same with the state(5) to be set 00:33:43.560 [2024-05-15 13:51:56.578744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d52b0 (9): Bad file descriptor 00:33:43.560 [2024-05-15 13:51:56.578815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:43.560 [2024-05-15 13:51:56.578830] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:43.560 [2024-05-15 13:51:56.578839] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:43.560 [2024-05-15 13:51:56.578856] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:43.560 [2024-05-15 13:51:56.585930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:43.560 [2024-05-15 13:51:56.586024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.586074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.586091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6870 with addr=10.0.0.2, port=4420 00:33:43.560 [2024-05-15 13:51:56.586101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6870 is same with the state(5) to be set 00:33:43.560 [2024-05-15 13:51:56.586119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6870 (9): Bad file descriptor 00:33:43.560 [2024-05-15 13:51:56.586133] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:43.560 [2024-05-15 13:51:56.586142] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:43.560 [2024-05-15 13:51:56.586151] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:43.560 [2024-05-15 13:51:56.586167] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:43.560 [2024-05-15 13:51:56.588358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:43.560 [2024-05-15 13:51:56.588443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.588490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.588506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d52b0 with addr=10.0.0.3, port=4420 00:33:43.560 [2024-05-15 13:51:56.588517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d52b0 is same with the state(5) to be set 00:33:43.560 [2024-05-15 13:51:56.588532] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d52b0 (9): Bad file descriptor 00:33:43.560 [2024-05-15 13:51:56.588547] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:43.560 [2024-05-15 13:51:56.588556] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:43.560 [2024-05-15 13:51:56.588565] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:43.560 [2024-05-15 13:51:56.588580] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:43.560 [2024-05-15 13:51:56.595991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:43.560 [2024-05-15 13:51:56.596084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.596132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.596148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6870 with addr=10.0.0.2, port=4420 00:33:43.560 [2024-05-15 13:51:56.596158] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6870 is same with the state(5) to be set 00:33:43.560 [2024-05-15 13:51:56.596175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6870 (9): Bad file descriptor 00:33:43.560 [2024-05-15 13:51:56.596190] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:43.560 [2024-05-15 13:51:56.596199] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:43.560 [2024-05-15 13:51:56.596208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:43.560 [2024-05-15 13:51:56.596223] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:43.560 [2024-05-15 13:51:56.598413] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:43.560 [2024-05-15 13:51:56.598503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.598560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.598576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d52b0 with addr=10.0.0.3, port=4420 00:33:43.560 [2024-05-15 13:51:56.598587] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d52b0 is same with the state(5) to be set 00:33:43.560 [2024-05-15 13:51:56.598621] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d52b0 (9): Bad file descriptor 00:33:43.560 [2024-05-15 13:51:56.598639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:43.560 [2024-05-15 13:51:56.598648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:43.560 [2024-05-15 13:51:56.598657] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:43.560 [2024-05-15 13:51:56.598672] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:43.560 [2024-05-15 13:51:56.606047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:43.560 [2024-05-15 13:51:56.606131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.606179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.606204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6870 with addr=10.0.0.2, port=4420 00:33:43.560 [2024-05-15 13:51:56.606214] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6870 is same with the state(5) to be set 00:33:43.560 [2024-05-15 13:51:56.606230] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6870 (9): Bad file descriptor 00:33:43.560 [2024-05-15 13:51:56.606244] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:43.560 [2024-05-15 13:51:56.606253] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:43.560 [2024-05-15 13:51:56.606262] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:43.560 [2024-05-15 13:51:56.606277] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:43.560 [2024-05-15 13:51:56.608468] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:43.560 [2024-05-15 13:51:56.608561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.608625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.608645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d52b0 with addr=10.0.0.3, port=4420 00:33:43.560 [2024-05-15 13:51:56.608655] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d52b0 is same with the state(5) to be set 00:33:43.560 [2024-05-15 13:51:56.608672] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d52b0 (9): Bad file descriptor 00:33:43.560 [2024-05-15 13:51:56.608686] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:43.560 [2024-05-15 13:51:56.608695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:43.560 [2024-05-15 13:51:56.608704] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:43.560 [2024-05-15 13:51:56.608718] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:43.560 [2024-05-15 13:51:56.616100] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:43.560 [2024-05-15 13:51:56.616185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.616231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.560 [2024-05-15 13:51:56.616247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6870 with addr=10.0.0.2, port=4420 00:33:43.560 [2024-05-15 13:51:56.616257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6870 is same with the state(5) to be set 00:33:43.561 [2024-05-15 13:51:56.616273] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6870 (9): Bad file descriptor 00:33:43.561 [2024-05-15 13:51:56.616288] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:43.561 [2024-05-15 13:51:56.616297] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:43.561 [2024-05-15 13:51:56.616306] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:43.561 [2024-05-15 13:51:56.616321] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:43.561 [2024-05-15 13:51:56.618530] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:43.561 [2024-05-15 13:51:56.618628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.561 [2024-05-15 13:51:56.618678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.561 [2024-05-15 13:51:56.618695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d52b0 with addr=10.0.0.3, port=4420 00:33:43.561 [2024-05-15 13:51:56.618706] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d52b0 is same with the state(5) to be set 00:33:43.561 [2024-05-15 13:51:56.618723] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d52b0 (9): Bad file descriptor 00:33:43.561 [2024-05-15 13:51:56.618737] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:43.561 [2024-05-15 13:51:56.618746] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:43.561 [2024-05-15 13:51:56.618754] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:43.561 [2024-05-15 13:51:56.618769] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:43.561 [2024-05-15 13:51:56.626156] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:43.561 [2024-05-15 13:51:56.626251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.561 [2024-05-15 13:51:56.626300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.561 [2024-05-15 13:51:56.626316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6870 with addr=10.0.0.2, port=4420 00:33:43.561 [2024-05-15 13:51:56.626327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6870 is same with the state(5) to be set 00:33:43.561 [2024-05-15 13:51:56.626344] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6870 (9): Bad file descriptor 00:33:43.561 [2024-05-15 13:51:56.626358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:43.561 [2024-05-15 13:51:56.626368] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:43.561 [2024-05-15 13:51:56.626377] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:43.561 [2024-05-15 13:51:56.626392] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:43.561 [2024-05-15 13:51:56.628582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:43.561 [2024-05-15 13:51:56.628678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.561 [2024-05-15 13:51:56.628725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.561 [2024-05-15 13:51:56.628741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d52b0 with addr=10.0.0.3, port=4420 00:33:43.561 [2024-05-15 13:51:56.628752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d52b0 is same with the state(5) to be set 00:33:43.561 [2024-05-15 13:51:56.628768] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d52b0 (9): Bad file descriptor 00:33:43.561 [2024-05-15 13:51:56.628782] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:43.561 [2024-05-15 13:51:56.628792] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:43.561 [2024-05-15 13:51:56.628801] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:43.561 [2024-05-15 13:51:56.628815] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:43.561 [2024-05-15 13:51:56.636216] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:43.561 [2024-05-15 13:51:56.636302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.561 [2024-05-15 13:51:56.636362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.561 [2024-05-15 13:51:56.636379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6870 with addr=10.0.0.2, port=4420 00:33:43.561 [2024-05-15 13:51:56.636390] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6870 is same with the state(5) to be set 00:33:43.561 [2024-05-15 13:51:56.636406] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6870 (9): Bad file descriptor 00:33:43.561 [2024-05-15 13:51:56.636420] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:43.561 [2024-05-15 13:51:56.636429] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:43.561 [2024-05-15 13:51:56.636438] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:43.561 [2024-05-15 13:51:56.636453] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:43.561 [2024-05-15 13:51:56.638648] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:43.561 [2024-05-15 13:51:56.638741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.561 [2024-05-15 13:51:56.638788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.561 [2024-05-15 13:51:56.638804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d52b0 with addr=10.0.0.3, port=4420 00:33:43.561 [2024-05-15 13:51:56.638814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d52b0 is same with the state(5) to be set 00:33:43.561 [2024-05-15 13:51:56.638831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d52b0 (9): Bad file descriptor 00:33:43.561 [2024-05-15 13:51:56.638864] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:43.561 [2024-05-15 13:51:56.638875] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:43.561 [2024-05-15 13:51:56.638885] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:43.561 [2024-05-15 13:51:56.638900] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:43.561 [2024-05-15 13:51:56.646272] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:43.561 [2024-05-15 13:51:56.646355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.561 [2024-05-15 13:51:56.646400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.561 [2024-05-15 13:51:56.646416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6870 with addr=10.0.0.2, port=4420 00:33:43.561 [2024-05-15 13:51:56.646426] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6870 is same with the state(5) to be set 00:33:43.561 [2024-05-15 13:51:56.646443] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6870 (9): Bad file descriptor 00:33:43.561 [2024-05-15 13:51:56.646458] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:43.561 [2024-05-15 13:51:56.646466] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:43.561 [2024-05-15 13:51:56.646475] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:43.561 [2024-05-15 13:51:56.646490] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:43.561 [2024-05-15 13:51:56.648704] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:43.561 [2024-05-15 13:51:56.648783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.561 [2024-05-15 13:51:56.648829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.561 [2024-05-15 13:51:56.648844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d52b0 with addr=10.0.0.3, port=4420 00:33:43.561 [2024-05-15 13:51:56.648854] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d52b0 is same with the state(5) to be set 00:33:43.561 [2024-05-15 13:51:56.648870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d52b0 (9): Bad file descriptor 00:33:43.561 [2024-05-15 13:51:56.648903] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:43.561 [2024-05-15 13:51:56.648913] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:43.561 [2024-05-15 13:51:56.648923] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:43.561 [2024-05-15 13:51:56.648937] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:43.821 [2024-05-15 13:51:56.656327] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:33:43.821 [2024-05-15 13:51:56.656442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.821 [2024-05-15 13:51:56.656490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.821 [2024-05-15 13:51:56.656506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d6870 with addr=10.0.0.2, port=4420 00:33:43.821 [2024-05-15 13:51:56.656517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d6870 is same with the state(5) to be set 00:33:43.821 [2024-05-15 13:51:56.656534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d6870 (9): Bad file descriptor 00:33:43.821 [2024-05-15 13:51:56.656549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:43.821 [2024-05-15 13:51:56.656565] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:33:43.821 [2024-05-15 13:51:56.656575] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:43.821 [2024-05-15 13:51:56.656590] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:43.821 [2024-05-15 13:51:56.658753] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:33:43.821 [2024-05-15 13:51:56.658831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.821 [2024-05-15 13:51:56.658878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:43.821 [2024-05-15 13:51:56.658894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11d52b0 with addr=10.0.0.3, port=4420 00:33:43.821 [2024-05-15 13:51:56.658903] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11d52b0 is same with the state(5) to be set 00:33:43.821 [2024-05-15 13:51:56.658920] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d52b0 (9): Bad file descriptor 00:33:43.821 [2024-05-15 13:51:56.658953] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:33:43.821 [2024-05-15 13:51:56.658963] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:33:43.821 [2024-05-15 13:51:56.658973] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:33:43.821 [2024-05-15 13:51:56.658987] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
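The block above is one reconnect loop repeating: each reset attempt against nqn.2016-06.io.spdk:cnode0 (10.0.0.2:4420) and nqn.2016-06.io.spdk:cnode20 (10.0.0.3:4420) is refused (connect() errno 111) and the controller is left in the failed state, and the discovery poller entries that follow drop the unreachable 4420 paths while keeping 4421. A minimal sketch of how the resulting controller and path state can be read back over the same RPC socket — an illustration only, assuming the SPDK scripts/rpc.py client and the /tmp/host.sock socket used by the rpc_cmd calls in this log — is:

    # Sketch only, not part of the captured run; assumes scripts/rpc.py and
    # the /tmp/host.sock application socket shown above.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
    # Per-controller paths: once the failed 4420 reconnects are cleaned up,
    # only the 4421 trsvcid should remain for each mdns controller.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'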
00:33:43.821 [2024-05-15 13:51:56.662962] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:33:43.821 [2024-05-15 13:51:56.662996] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:33:43.821 [2024-05-15 13:51:56.663035] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:33:43.821 [2024-05-15 13:51:56.663072] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:43.821 [2024-05-15 13:51:56.663089] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:43.821 [2024-05-15 13:51:56.663104] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:43.821 [2024-05-15 13:51:56.749056] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:33:43.821 [2024-05-15 13:51:56.749148] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:44.760 13:51:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:33:45.019 [2024-05-15 13:51:57.859913] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery 
-- host/mdns_discovery.sh@65 -- # sort 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:45.955 13:51:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:45.955 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:33:45.955 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:33:45.955 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:33:45.955 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:33:45.955 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:45.955 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:33:46.213 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:46.214 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:46.214 [2024-05-15 13:51:59.110040] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:33:46.214 2024/05/15 13:51:59 error on 
JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:33:46.214 request: 00:33:46.214 { 00:33:46.214 "method": "bdev_nvme_start_mdns_discovery", 00:33:46.214 "params": { 00:33:46.214 "name": "mdns", 00:33:46.214 "svcname": "_nvme-disc._http", 00:33:46.214 "hostnqn": "nqn.2021-12.io.spdk:test" 00:33:46.214 } 00:33:46.214 } 00:33:46.214 Got JSON-RPC error response 00:33:46.214 GoRPCClient: error on JSON-RPC call 00:33:46.214 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:46.214 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:46.214 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:46.214 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:46.214 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:46.214 13:51:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:33:46.780 [2024-05-15 13:51:59.698703] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:33:46.780 [2024-05-15 13:51:59.798697] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:33:47.040 [2024-05-15 13:51:59.898703] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:33:47.040 [2024-05-15 13:51:59.898761] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:33:47.040 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:33:47.040 cookie is 0 00:33:47.040 is_local: 1 00:33:47.040 our_own: 0 00:33:47.040 wide_area: 0 00:33:47.040 multicast: 1 00:33:47.040 cached: 1 00:33:47.040 [2024-05-15 13:51:59.998703] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:33:47.040 [2024-05-15 13:51:59.998746] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:33:47.040 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:33:47.040 cookie is 0 00:33:47.040 is_local: 1 00:33:47.040 our_own: 0 00:33:47.040 wide_area: 0 00:33:47.040 multicast: 1 00:33:47.040 cached: 1 00:33:47.040 [2024-05-15 13:51:59.998778] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:33:47.040 [2024-05-15 13:52:00.098722] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:33:47.040 [2024-05-15 13:52:00.098793] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:33:47.040 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" "nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:33:47.040 cookie is 0 00:33:47.040 is_local: 1 00:33:47.040 our_own: 0 00:33:47.040 wide_area: 0 00:33:47.040 multicast: 1 00:33:47.040 cached: 1 00:33:47.299 [2024-05-15 13:52:00.198705] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:33:47.299 [2024-05-15 13:52:00.198744] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:33:47.299 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:33:47.299 cookie is 0 00:33:47.299 is_local: 1 00:33:47.299 our_own: 0 00:33:47.299 wide_area: 0 00:33:47.299 multicast: 1 00:33:47.299 cached: 1 00:33:47.299 [2024-05-15 13:52:00.198759] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:33:47.866 [2024-05-15 13:52:00.905831] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:33:47.866 [2024-05-15 13:52:00.905877] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:33:47.866 [2024-05-15 13:52:00.905896] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:33:48.124 [2024-05-15 13:52:00.992004] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:33:48.124 [2024-05-15 13:52:01.051549] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:33:48.124 [2024-05-15 13:52:01.051595] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:33:48.124 [2024-05-15 13:52:01.105681] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:48.124 [2024-05-15 13:52:01.105708] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:48.124 [2024-05-15 13:52:01.105726] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:48.124 [2024-05-15 13:52:01.192829] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:33:48.382 [2024-05-15 13:52:01.252323] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:33:48.382 [2024-05-15 13:52:01.252396] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 
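The failed call above is the negative-path check at mdns_discovery.sh@182: a second bdev_nvme_start_mdns_discovery that reuses the bdev name "mdns" is rejected with Code=-17 (File exists), even though a different svcname (_nvme-disc._http) was requested. A short sketch of the same call sequence — illustrative only, assuming the SPDK scripts/rpc.py client against the /tmp/host.sock socket used throughout this test — is:

    # Sketch only; mirrors the RPC parameters visible in the log above.
    # First start succeeds and begins browsing _nvme-disc._tcp via avahi.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
    # A second start with the same -b name (even with a different svcname)
    # fails with "Code=-17 Msg=File exists", as seen above.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test || true
    # Stopping the named discovery service releases the name again.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns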
00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.665 [2024-05-15 13:52:04.305829] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:33:51.665 2024/05/15 13:52:04 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:33:51.665 request: 00:33:51.665 { 00:33:51.665 "method": "bdev_nvme_start_mdns_discovery", 00:33:51.665 "params": { 00:33:51.665 "name": "cdc", 00:33:51.665 "svcname": "_nvme-disc._tcp", 00:33:51.665 "hostnqn": "nqn.2021-12.io.spdk:test" 00:33:51.665 } 00:33:51.665 } 00:33:51.665 Got JSON-RPC error response 00:33:51.665 GoRPCClient: error on JSON-RPC call 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:51.665 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 113417 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 113417 00:33:51.666 [2024-05-15 13:52:04.546178] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 113446 00:33:51.666 Got SIGTERM, quitting. 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:33:51.666 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:33:51.666 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:33:51.666 avahi-daemon 0.8 exiting. 
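With both mDNS discovery services stopped and the avahi daemon exiting above, what follows is the standard nvmftestfini teardown: the kernel NVMe/TCP modules are unloaded and the target process (pid 113360) is killed. A condensed sketch of those cleanup steps — assuming root privileges and a PID that is specific to this run — is:

    # Illustrative teardown sketch mirroring the nvmftestfini steps below;
    # the PID is taken from this run's log and will differ elsewhere.
    kill 113360                    # stop the nvmf target (reported as reactor_1)
    modprobe -v -r nvme-tcp        # unload the kernel NVMe/TCP transport
    modprobe -v -r nvme-fabrics    # unload the NVMe fabrics core
    ip -4 addr flush nvmf_init_if  # drop the test addresses from the initiator interface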
00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:51.666 rmmod nvme_tcp 00:33:51.666 rmmod nvme_fabrics 00:33:51.666 rmmod nvme_keyring 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 113360 ']' 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 113360 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@946 -- # '[' -z 113360 ']' 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # kill -0 113360 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # uname 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 113360 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:33:51.666 killing process with pid 113360 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 113360' 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@965 -- # kill 113360 00:33:51.666 [2024-05-15 13:52:04.760365] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:51.666 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@970 -- # wait 113360 00:33:51.926 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:51.926 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:51.926 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:51.926 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:51.926 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:51.926 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.926 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:51.926 13:52:04 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.926 13:52:05 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:52.186 00:33:52.186 real 0m20.656s 00:33:52.186 user 0m40.531s 00:33:52.186 sys 0m1.995s 00:33:52.186 13:52:05 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:52.186 ************************************ 00:33:52.186 END TEST 
nvmf_mdns_discovery 00:33:52.186 ************************************ 00:33:52.186 13:52:05 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:52.186 13:52:05 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:33:52.186 13:52:05 nvmf_tcp -- nvmf/nvmf.sh@116 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:33:52.186 13:52:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:33:52.186 13:52:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:52.186 13:52:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:52.186 ************************************ 00:33:52.186 START TEST nvmf_host_multipath 00:33:52.186 ************************************ 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:33:52.186 * Looking for test storage... 00:33:52.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:52.186 13:52:05 
nvmf_tcp.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:33:52.186 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- 
nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:33:52.187 Cannot find device "nvmf_tgt_br" 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:33:52.187 Cannot find device "nvmf_tgt_br2" 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:33:52.187 Cannot find device "nvmf_tgt_br" 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:33:52.187 Cannot find device "nvmf_tgt_br2" 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:33:52.187 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:52.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:52.446 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:33:52.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:52.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:33:52.446 00:33:52.446 --- 10.0.0.2 ping statistics --- 00:33:52.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.446 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:33:52.446 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:52.446 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:33:52.446 00:33:52.446 --- 10.0.0.3 ping statistics --- 00:33:52.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.446 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:52.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:52.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:33:52.446 00:33:52.446 --- 10.0.0.1 ping statistics --- 00:33:52.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:52.446 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=113998 00:33:52.446 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 113998 00:33:52.447 13:52:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 113998 ']' 00:33:52.447 13:52:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:52.447 13:52:05 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:52.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:52.447 13:52:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:52.447 13:52:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:52.447 13:52:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:52.447 13:52:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:52.705 [2024-05-15 13:52:05.578939] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:33:52.705 [2024-05-15 13:52:05.579030] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:52.705 [2024-05-15 13:52:05.703599] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
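Before the target app comes up, nvmf_veth_init (traced a few lines above) wires a small two-path topology: an initiator-side veth on 10.0.0.1 and two target-side veths on 10.0.0.2/10.0.0.3 inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. A condensed sketch using the same device and address names printed in the trace; the loop condensation is editorial:

    # Namespace plus three veth pairs; the *_br ends stay in the root namespace.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # Addresses: initiator 10.0.0.1, target listeners on 10.0.0.2 and 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # Bridge the root-namespace ends so the initiator can reach both target paths.
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$dev" up; done
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done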
00:33:52.705 [2024-05-15 13:52:05.715668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:52.964 [2024-05-15 13:52:05.820203] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:52.964 [2024-05-15 13:52:05.820299] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:52.964 [2024-05-15 13:52:05.820311] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:52.964 [2024-05-15 13:52:05.820320] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:52.964 [2024-05-15 13:52:05.820327] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:52.964 [2024-05-15 13:52:05.820457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:52.964 [2024-05-15 13:52:05.820465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:53.531 13:52:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:53.531 13:52:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:33:53.531 13:52:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:53.531 13:52:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:53.531 13:52:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:53.531 13:52:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.531 13:52:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=113998 00:33:53.531 13:52:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:53.790 [2024-05-15 13:52:06.863770] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.790 13:52:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:54.048 Malloc0 00:33:54.307 13:52:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:54.307 13:52:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:54.566 13:52:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.824 [2024-05-15 13:52:07.893883] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:33:54.824 [2024-05-15 13:52:07.894227] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.824 13:52:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:55.392 [2024-05-15 13:52:08.186319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:55.392 13:52:08 nvmf_tcp.nvmf_host_multipath 
-- host/multipath.sh@44 -- # bdevperf_pid=114096 00:33:55.392 13:52:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:55.392 13:52:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:55.392 13:52:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 114096 /var/tmp/bdevperf.sock 00:33:55.392 13:52:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 114096 ']' 00:33:55.392 13:52:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:55.392 13:52:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:55.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:55.392 13:52:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:55.392 13:52:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:55.392 13:52:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:56.338 13:52:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:56.338 13:52:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:33:56.338 13:52:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:56.597 13:52:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:33:56.855 Nvme0n1 00:33:56.855 13:52:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:57.421 Nvme0n1 00:33:57.421 13:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:57.421 13:52:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:33:58.356 13:52:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:33:58.356 13:52:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:58.625 13:52:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:58.884 13:52:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:33:58.884 13:52:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113998 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:33:58.884 13:52:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114189 
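Each confirm_io_on_port round that follows uses the same pattern: flip the ANA state of the two listeners, let bdevperf keep driving I/O over the multipath Nvme0 controller, and ask the target which trsvcid now reports the requested state. Reduced to the raw RPC calls and jq filter shown in the trace, with rpc.py standing in for the script's $rpc_py path:

    # Make 4420 non-optimized and 4421 optimized, then query which listener is
    # currently optimized; multipath I/O is expected to follow that port.
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n optimized
    active_port=$(rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid')
    [[ "$active_port" == "4421" ]]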
00:33:58.884 13:52:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:34:05.445 13:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:34:05.445 13:52:17 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:34:05.446 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:34:05.446 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:05.446 Attaching 4 probes... 00:34:05.446 @path[10.0.0.2, 4421]: 17488 00:34:05.446 @path[10.0.0.2, 4421]: 17920 00:34:05.446 @path[10.0.0.2, 4421]: 17643 00:34:05.446 @path[10.0.0.2, 4421]: 17682 00:34:05.446 @path[10.0.0.2, 4421]: 17916 00:34:05.446 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:34:05.446 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:34:05.446 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:34:05.446 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:34:05.446 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:34:05.446 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:34:05.446 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114189 00:34:05.446 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:05.446 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:34:05.446 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:05.446 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:05.705 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:34:05.705 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114314 00:34:05.705 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113998 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:34:05.705 13:52:18 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:34:12.330 13:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:34:12.330 13:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:34:12.330 13:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:34:12.330 13:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:12.330 Attaching 4 probes... 
00:34:12.330 @path[10.0.0.2, 4420]: 17001 00:34:12.330 @path[10.0.0.2, 4420]: 17128 00:34:12.330 @path[10.0.0.2, 4420]: 17238 00:34:12.330 @path[10.0.0.2, 4420]: 17299 00:34:12.330 @path[10.0.0.2, 4420]: 17473 00:34:12.330 13:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:34:12.330 13:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:34:12.330 13:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:34:12.330 13:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:34:12.330 13:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:34:12.330 13:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:34:12.330 13:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114314 00:34:12.330 13:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:12.330 13:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:34:12.330 13:52:24 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:12.330 13:52:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:12.587 13:52:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:34:12.587 13:52:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114446 00:34:12.588 13:52:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113998 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:34:12.588 13:52:25 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:34:19.197 13:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:34:19.197 13:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:34:19.197 13:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:34:19.197 13:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:19.197 Attaching 4 probes... 
00:34:19.197 @path[10.0.0.2, 4421]: 12142 00:34:19.197 @path[10.0.0.2, 4421]: 17482 00:34:19.197 @path[10.0.0.2, 4421]: 17462 00:34:19.197 @path[10.0.0.2, 4421]: 17389 00:34:19.197 @path[10.0.0.2, 4421]: 17417 00:34:19.197 13:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:34:19.197 13:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:34:19.197 13:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:34:19.197 13:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:34:19.197 13:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:34:19.197 13:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:34:19.197 13:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114446 00:34:19.197 13:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:19.197 13:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:34:19.197 13:52:31 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:34:19.197 13:52:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:19.455 13:52:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:34:19.455 13:52:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114580 00:34:19.455 13:52:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113998 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:34:19.455 13:52:32 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:26.010 Attaching 4 probes... 
00:34:26.010 00:34:26.010 00:34:26.010 00:34:26.010 00:34:26.010 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114580 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:26.010 13:52:38 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:26.268 13:52:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:34:26.268 13:52:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114711 00:34:26.269 13:52:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113998 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:34:26.269 13:52:39 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:34:32.829 13:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:34:32.829 13:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:34:32.829 13:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:34:32.829 13:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:32.829 Attaching 4 probes... 
00:34:32.829 @path[10.0.0.2, 4421]: 17039 00:34:32.829 @path[10.0.0.2, 4421]: 17180 00:34:32.829 @path[10.0.0.2, 4421]: 17331 00:34:32.829 @path[10.0.0.2, 4421]: 17285 00:34:32.829 @path[10.0.0.2, 4421]: 17258 00:34:32.829 13:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:34:32.829 13:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:34:32.829 13:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:34:32.829 13:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:34:32.829 13:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:34:32.829 13:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:34:32.829 13:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114711 00:34:32.829 13:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:32.830 13:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:32.830 [2024-05-15 13:52:45.663865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.663936] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.663949] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.663958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.663966] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.663975] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.663984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.663991] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.663999] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664007] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664023] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664033] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664040] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 
00:34:32.830 [2024-05-15 13:52:45.664474]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664483] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664496] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664508] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664544] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664557] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664583] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664592] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664613] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664623] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664631] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664654] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664671] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664679] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664688] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 [2024-05-15 13:52:45.664696] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199d630 is same with the state(5) to be set 00:34:32.830 13:52:45 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:34:33.802 13:52:46 nvmf_tcp.nvmf_host_multipath -- 
host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:34:33.802 13:52:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=114837 00:34:33.802 13:52:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113998 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:34:33.802 13:52:46 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:34:40.375 13:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:34:40.375 13:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:34:40.375 13:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:34:40.375 13:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:40.375 Attaching 4 probes... 00:34:40.375 @path[10.0.0.2, 4420]: 16491 00:34:40.375 @path[10.0.0.2, 4420]: 17185 00:34:40.375 @path[10.0.0.2, 4420]: 16926 00:34:40.375 @path[10.0.0.2, 4420]: 16839 00:34:40.375 @path[10.0.0.2, 4420]: 16899 00:34:40.375 13:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:34:40.375 13:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:34:40.375 13:52:52 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:34:40.375 13:52:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:34:40.375 13:52:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:34:40.375 13:52:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:34:40.375 13:52:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 114837 00:34:40.375 13:52:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:34:40.375 13:52:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:40.375 [2024-05-15 13:52:53.222254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:40.375 13:52:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:34:40.632 13:52:53 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:34:47.186 13:52:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:34:47.186 13:52:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=115024 00:34:47.186 13:52:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113998 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:34:47.186 13:52:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:34:52.543 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:34:52.543 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
00:34:47.186 13:52:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421
00:34:47.186 13:52:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=115024
00:34:47.186 13:52:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 113998 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:34:47.186 13:52:59 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6
00:34:52.543 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:34:52.543 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:34:52.801 Attaching 4 probes...
00:34:52.801 @path[10.0.0.2, 4421]: 16778
00:34:52.801 @path[10.0.0.2, 4421]: 16854
00:34:52.801 @path[10.0.0.2, 4421]: 16929
00:34:52.801 @path[10.0.0.2, 4421]: 16950
00:34:52.801 @path[10.0.0.2, 4421]: 16938
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 115024
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
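The port check above is made against the bpftrace map dump rather than the RPC output alone: trace.txt holds one "@path[<addr>, <port>]: <count>" line per interval, and the cut/sed/awk entries at multipath.sh@69 reduce that to the first port actually carrying I/O. A small sketch of that pipeline on data of the shape printed above (the sample file and counts are copied from this run; the echo is illustrative):

# Illustrative only: reproduce the @69 extraction on a sample bpftrace dump.
cat > trace.txt <<'EOF'
Attaching 4 probes...
@path[10.0.0.2, 4421]: 16778
@path[10.0.0.2, 4421]: 16854
EOF
port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
[[ "$port" == "4421" ]] && echo "I/O is flowing on port $port"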
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 114096
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 114096 ']'
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 114096
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 114096
00:34:52.801 killing process with pid 114096
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 114096'
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 114096
00:34:52.801 13:53:05 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 114096
00:34:53.059 Connection closed with partial response:
00:34:53.059
00:34:53.059
00:34:53.347 13:53:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 114096
00:34:53.347 13:53:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:34:53.347 [2024-05-15 13:52:08.262970] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization...
00:34:53.347 [2024-05-15 13:52:08.263084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114096 ]
00:34:53.347 [2024-05-15 13:52:08.386311] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:34:53.347 [2024-05-15 13:52:08.406657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:53.347 [2024-05-15 13:52:08.505411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:34:53.347 Running I/O for 90 seconds...
00:34:53.347 [2024-05-15 13:52:18.616215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:53.347 [2024-05-15 13:52:18.616299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:34:53.348 [... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pattern repeats for the remaining in-flight READ and WRITE commands on qid:1, every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02), in bursts logged at 13:52:18 and 13:52:25 while the ANA state of the paths changes ...]
00:34:53.353 [2024-05-15 13:52:25.247313] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:53.353 [2024-05-15 13:52:25.247825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.353 [2024-05-15 13:52:25.247852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:53.353 [2024-05-15 13:52:25.247879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.353 [2024-05-15 13:52:25.247898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:53.353 [2024-05-15 13:52:25.247920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.353 [2024-05-15 13:52:25.247936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:53.353 [2024-05-15 13:52:25.247957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.353 [2024-05-15 13:52:25.247973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.247995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 
[2024-05-15 13:52:25.248209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1144 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.248703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.354 [2024-05-15 13:52:25.248740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.354 [2024-05-15 13:52:25.248777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.354 [2024-05-15 13:52:25.248814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.354 [2024-05-15 13:52:25.248860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.354 [2024-05-15 13:52:25.248899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.354 [2024-05-15 13:52:25.248936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.354 [2024-05-15 13:52:25.248973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.248994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:9 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.249021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.249043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.354 [2024-05-15 13:52:25.249059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:53.354 [2024-05-15 13:52:25.249081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249385] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:53.355 
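[editor note] Each nvme_io_qpair_print_command entry above records the submission queue ID (sqid), command ID (cid), namespace ID (nsid), starting LBA, and the transfer length in logical blocks; the SGL descriptor length of 0x1000 bytes printed next to len:8 implies a 512-byte logical block in this test. A minimal sketch of that arithmetic for the WRITE with cid:69 above — the helper name and the 512-byte assumption are illustrative, not SPDK API:
/* lba_to_bytes.c: sketch only, assuming a 512-byte logical block
 * (4096-byte SGL payload / len:8 blocks, as seen in the log). */
#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 512u  /* assumption derived from len:8 vs. len:0x1000 */

static uint64_t lba_to_byte_offset(uint64_t lba)
{
    return lba * BLOCK_SIZE;
}

int main(void)
{
    uint64_t lba = 1320;   /* WRITE sqid:1 cid:69 lba:1320 from the log above */
    uint32_t nblocks = 8;  /* len:8 */
    uint64_t start = lba_to_byte_offset(lba);

    printf("WRITE covers bytes %llu..%llu (%u bytes, i.e. 0x%x)\n",
           (unsigned long long)start,
           (unsigned long long)(start + nblocks * BLOCK_SIZE - 1),
           nblocks * BLOCK_SIZE, nblocks * BLOCK_SIZE);
    return 0;
}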
[2024-05-15 13:52:25.249782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.249940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.249956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.250007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.250025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.250052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.250069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.250089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.250106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.250127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.250143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:53.355 [2024-05-15 13:52:25.250164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.355 [2024-05-15 13:52:25.250180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 
sqhd:0037 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.250201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.250217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.250238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.250254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.250275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.250291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.250312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.250329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.250349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.250365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.250386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.250402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.250423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.250439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.250463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.250490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.250512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.250529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.250550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.250566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.250587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.250618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.251452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.251479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.251512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.251531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.251553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.251569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.251591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.251623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.251647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.251663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.251684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.251700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.251722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.251738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.251758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.251784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.251805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.251835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.251858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.251874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.251896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.251912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.251933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.251949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.251970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.251986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.252007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.252023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.252044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.252060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.252081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.252097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:53.356 [2024-05-15 13:52:25.252119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.356 [2024-05-15 13:52:25.252135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.252178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 
13:52:25.252215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.252253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.252290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.252335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.252386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.252425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.252469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.357 [2024-05-15 13:52:25.252507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.357 [2024-05-15 13:52:25.252544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.357 [2024-05-15 13:52:25.252582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:53.357 [2024-05-15 13:52:25.252632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.357 [2024-05-15 13:52:25.252670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.357 [2024-05-15 13:52:25.252707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.357 [2024-05-15 13:52:25.252744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.357 [2024-05-15 13:52:25.252781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.252832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.252870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.252906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.252943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.252965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.252981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.253001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 
lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.253017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.253038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.253054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.253074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.253096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.253117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.357 [2024-05-15 13:52:25.253134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.253155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.253171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.253192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.253207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.253228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.357 [2024-05-15 13:52:25.253244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:53.357 [2024-05-15 13:52:25.253265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.253287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.253309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.253325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.253346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.253363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.253383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:103 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.253399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.253421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.253437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.253458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.253474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.253498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.253513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.253534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.253550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.253570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.253586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.253618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.253637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.253658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.253674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.253696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.253717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.254423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.254460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.254488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.254507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.254528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.254544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.254569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.254585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.254626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.254647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.254668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.254685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.254705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.254721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.254742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.254758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.254789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.254811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.254833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.254850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.254871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.254887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:53.358 
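[editor note] The matching spdk_nvme_print_completion entries summarize the NVMe completion status as "(SCT/SC)": status code type 0x3 is path-related status, and status code 0x02 under that type is Asymmetric Access Inaccessible, which is exactly the "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" text repeated above; sqhd is the submission queue head echoed by the controller, and p/m/dnr are the phase, more, and do-not-retry bits. Below is a minimal, self-contained sketch of that decode following the NVMe base spec bit layout — it is not the SPDK implementation, just an illustration of what the printed fields mean:
/* decode_status.c: sketch only, decoding the 16-bit completion
 * status-plus-phase word (CQE dword 3, bits 31:16 plus the phase bit):
 *   bit 0      phase tag (p)
 *   bits 1-8   status code (sc)
 *   bits 9-11  status code type (sct)
 *   bits 12-13 command retry delay (crd, ignored here)
 *   bit 14     more (m)
 *   bit 15     do not retry (dnr)
 */
#include <stdint.h>
#include <stdio.h>

static void decode_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;

    /* sct 0x3 / sc 0x02 corresponds to the log text
     * "ASYMMETRIC ACCESS INACCESSIBLE (03/02)". */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* 0x0604 -> sct 0x3, sc 0x02, p 0, m 0, dnr 0,
     * i.e. "(03/02) p:0 m:0 dnr:0" as in the entries above. */
    decode_status(0x0604);
    return 0;
}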
[2024-05-15 13:52:25.254907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.254923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.254944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.254961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.254991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.255007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.255028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.255044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.255065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.255087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.255109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.255125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.255146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.255162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.255183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.255200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.255221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.255237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:53.358 [2024-05-15 13:52:25.255258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.358 [2024-05-15 13:52:25.255274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 
sqhd:0010 p:0 m:0 dnr:0 00:34:53.359 [2024-05-15 13:52:25.255296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.359 [2024-05-15 13:52:25.255312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:53.359 [2024-05-15 13:52:25.255332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.359 [2024-05-15 13:52:25.255348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:53.359 [2024-05-15 13:52:25.255370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.359 [2024-05-15 13:52:25.255386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:53.359 [2024-05-15 13:52:25.255408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.359 [2024-05-15 13:52:25.255425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:53.359 [2024-05-15 13:52:25.255447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.359 [2024-05-15 13:52:25.255469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:53.359 [2024-05-15 13:52:25.255491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.359 [2024-05-15 13:52:25.255508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:53.359 [2024-05-15 13:52:25.255529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.359 [2024-05-15 13:52:25.255545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:53.359 [2024-05-15 13:52:25.255566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.359 [2024-05-15 13:52:25.255582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:53.359 [2024-05-15 13:52:25.255616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.359 [2024-05-15 13:52:25.255635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:53.359 [2024-05-15 13:52:25.255657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.359 [2024-05-15 13:52:25.255674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.255706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.255727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.255749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.255765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.255787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.255803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.255824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.255840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 
[2024-05-15 13:52:25.266898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.266972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.266993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.267009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.267030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.267046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.267067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.267083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.267105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.267121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.267142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.267158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.267179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.267195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.267216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.267231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.267253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1408 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.267268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.267289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.267305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:53.360 [2024-05-15 13:52:25.267333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.360 [2024-05-15 13:52:25.267350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.267371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.267387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.267408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.267423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.267445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.267460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.267482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.267497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.267518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.267534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.267555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.267571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.267593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.267630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.268682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:73 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.268713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.268743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.268761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.268785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.268811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.268832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.268848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.268869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.268897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.268921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.268938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.268959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.268975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.268997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269109] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 
13:52:25.269504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.361 [2024-05-15 13:52:25.269725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:53.361 [2024-05-15 13:52:25.269747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.362 [2024-05-15 13:52:25.269763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.269785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.362 [2024-05-15 13:52:25.269801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.269823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.362 [2024-05-15 13:52:25.269839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.269868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.362 [2024-05-15 13:52:25.269885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f 
p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.269907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.362 [2024-05-15 13:52:25.269923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.269946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.362 [2024-05-15 13:52:25.269962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.269983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.362 [2024-05-15 13:52:25.269999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.362 [2024-05-15 13:52:25.270036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:36 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.362 [2024-05-15 13:52:25.270379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.270910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.270926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.271621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.271649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.271676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.271694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.271715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.271732] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.271754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.271769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.271791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.271806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.271828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.271844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.271865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.271881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.362 [2024-05-15 13:52:25.271903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.362 [2024-05-15 13:52:25.271919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.271940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.271955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.271987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 
[2024-05-15 13:52:25.272114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1152 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.363 [2024-05-15 13:52:25.272592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.363 [2024-05-15 13:52:25.272643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.363 [2024-05-15 13:52:25.272680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.363 [2024-05-15 13:52:25.272716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.363 [2024-05-15 13:52:25.272753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.363 [2024-05-15 13:52:25.272790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.363 [2024-05-15 13:52:25.272827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:20 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.272961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.272984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.273000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.273021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.273037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.273058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.273074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.273095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.273111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.273132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.273147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.273169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.273185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.273206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.273221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.273242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.273258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.273279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.273294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.273315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.273331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.273351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.273367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.273388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.273404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.273437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.363 [2024-05-15 13:52:25.273453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:53.363 [2024-05-15 13:52:25.273475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.364 [2024-05-15 13:52:25.273490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:53.364 [2024-05-15 13:52:25.273512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.364 [2024-05-15 13:52:25.273527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:53.364 [2024-05-15 13:52:25.273549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.364 [2024-05-15 13:52:25.273565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:53.364 [2024-05-15 13:52:25.273586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.364 [2024-05-15 13:52:25.273613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:53.364 [2024-05-15 13:52:25.273637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.364 [2024-05-15 13:52:25.273654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:53.364 
[2024-05-15 13:52:25.273675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:53.364 [2024-05-15 13:52:25.273691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:34:53.364 [... 2024-05-15 13:52:25.273713 through 13:52:25.291359: repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs for queued WRITE (lba 672-1560) and READ (lba 544-664) commands on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:34:53.369 [2024-05-15 13:52:25.291380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:53.369 [2024-05-15 13:52:25.291395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02)
qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:53.369 [2024-05-15 13:52:25.291416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.369 [2024-05-15 13:52:25.291432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.291453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.370 [2024-05-15 13:52:25.291469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.291490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.291512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.291535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.291551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.291572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.291588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.291620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.291639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.291660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.291675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.291697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.291713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.291734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.291749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.291771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.291786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.291807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.291823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.291844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.291859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.291881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.291897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.291918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.291934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.292646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.292673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.292712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.292731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.292753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.292769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.292791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.292807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.292827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.292843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.292864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.292880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.292901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.292917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.292938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.292953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.292975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.292990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.293011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.370 [2024-05-15 13:52:25.293027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.370 [2024-05-15 13:52:25.293048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 
[2024-05-15 13:52:25.293256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1160 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.371 [2024-05-15 13:52:25.293685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.371 [2024-05-15 13:52:25.293723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.371 [2024-05-15 13:52:25.293760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.371 [2024-05-15 13:52:25.293797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.371 [2024-05-15 13:52:25.293834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.371 [2024-05-15 13:52:25.293879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.371 [2024-05-15 13:52:25.293916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.293974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.293989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.294011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.294027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.294048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.294064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.294085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.294101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.294129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.294145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.294166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.294182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.294203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.294218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.371 [2024-05-15 13:52:25.294239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.371 [2024-05-15 13:52:25.294255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294386] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:53.372 
[2024-05-15 13:52:25.294783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.294966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.294981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.295002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.295017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.295038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.295061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.295083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.295099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.295120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.295135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 
sqhd:0039 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.295156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.295172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.295193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.295209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.295229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.295246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.295267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.295282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.295304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.295320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.296176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.296203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.296231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.296257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.296279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.296296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.296318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.296333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.372 [2024-05-15 13:52:25.296355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.372 [2024-05-15 13:52:25.296383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296790] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.296973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.296989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.297025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.297063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.297100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.297137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 
13:52:25.297173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.297210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.297247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.297283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.297320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.373 [2024-05-15 13:52:25.297366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.373 [2024-05-15 13:52:25.297405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.373 [2024-05-15 13:52:25.297442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.373 [2024-05-15 13:52:25.297479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.373 [2024-05-15 13:52:25.297521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:34:53.373 [2024-05-15 13:52:25.297559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.373 [2024-05-15 13:52:25.297596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.373 [2024-05-15 13:52:25.297648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:53.373 [2024-05-15 13:52:25.297669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.373 [2024-05-15 13:52:25.297685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.297706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.297722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.297743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.297758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.297779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.297795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.297816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.297832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.297860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.297876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.297897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.297913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.297934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:856 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.297950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.297971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.374 [2024-05-15 13:52:25.297987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.298008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.298024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.298044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.298060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.298081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.298097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.298118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.298134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.298155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.298171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.298192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.298208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.298229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.298245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.298266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.298282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.298310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:41 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.298326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.298347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.298363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.298384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.298400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299347] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:53.374 
[2024-05-15 13:52:25.299749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.374 [2024-05-15 13:52:25.299802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:53.374 [2024-05-15 13:52:25.299823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.299839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.299860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.299876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.299904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.299928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.299951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.299967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.299988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 
cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-05-15 13:52:25.300151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-05-15 13:52:25.300188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-05-15 13:52:25.300225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-05-15 13:52:25.300262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-05-15 13:52:25.300299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-05-15 13:52:25.300335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.375 [2024-05-15 13:52:25.300385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.300962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.300978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.308523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.308565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.308592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.308624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.308648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.308665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.308687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.308703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.308726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.308742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.308763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.308779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.308810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.308827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.308849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 
[2024-05-15 13:52:25.308865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.308886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.308902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.308924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.375 [2024-05-15 13:52:25.308954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:53.375 [2024-05-15 13:52:25.308977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.308994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.309031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.309089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.309126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:1400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.309163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.309200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.309237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1424 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.309275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:1432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.309315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.309355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.309395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.309785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.309871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.309917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.309962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.309989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:100 nsid:1 lba:1504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.310929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 
13:52:25.310981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.310998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.311023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.311039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.311065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.311081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.311106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.311122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.311147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.376 [2024-05-15 13:52:25.311163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.311189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.376 [2024-05-15 13:52:25.311206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.311232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.376 [2024-05-15 13:52:25.311248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.311273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.376 [2024-05-15 13:52:25.311289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.311314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.376 [2024-05-15 13:52:25.311330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.311356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.376 [2024-05-15 13:52:25.311372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.311397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.376 [2024-05-15 13:52:25.311412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.311438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.376 [2024-05-15 13:52:25.311454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:53.376 [2024-05-15 13:52:25.311487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.376 [2024-05-15 13:52:25.311503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.311529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.311546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.311572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.311588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.311625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.311644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.311670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.311686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.311712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.311728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.311753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.311769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.311794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.311811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.311836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:25.311853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.311879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.311896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.311921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.311937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.311962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.311978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.312012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.312028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.312054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.312070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.312095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.312111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.312136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.312152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.312178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.312194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.312219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.312236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.312262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.312278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:25.312447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:25.312470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.287390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.287450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.287489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.287526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.287592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.287654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.287691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.287728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.287764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.287801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.287839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.287875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.287912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.287949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.287969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.287985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.288021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.288068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:53.377 [2024-05-15 13:52:32.288109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.288147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.288184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.288222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.288260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.288298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.288336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.377 [2024-05-15 13:52:32.288384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.288434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.288472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.288509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.288551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.288598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.288659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.288698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:53.377 [2024-05-15 13:52:32.288719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.377 [2024-05-15 13:52:32.288736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.288758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.288774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.288796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.288812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.288833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.288849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.288871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.288887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.288908] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.288924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.288945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.288960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.288982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.288998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 
dnr:0 00:34:53.378 [2024-05-15 13:52:32.289290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.289965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.289987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.290011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.290800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.378 [2024-05-15 13:52:32.290828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.290864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.378 [2024-05-15 13:52:32.290883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.290905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.378 [2024-05-15 13:52:32.290921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.290942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.378 [2024-05-15 13:52:32.290958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.290980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.378 [2024-05-15 13:52:32.290996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.291017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.378 [2024-05-15 13:52:32.291033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.291054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.378 [2024-05-15 13:52:32.291070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.291091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.378 [2024-05-15 13:52:32.291107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.291128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.378 [2024-05-15 13:52:32.291144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.291166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.378 [2024-05-15 13:52:32.291181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.291203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:53.378 [2024-05-15 13:52:32.291218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.291240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.378 [2024-05-15 13:52:32.291267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:53.378 [2024-05-15 13:52:32.291300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.378 [2024-05-15 13:52:32.291318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.291966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.291986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 
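Each failed I/O in the stretch above is reported as a pair of notices: nvme_io_qpair_print_command echoes the submission (opcode, submission queue id, command id, namespace id, starting LBA, length in blocks, and the SGL descriptor type), and spdk_nvme_print_completion follows with the matching completion (status string, hex status pair, queue/command ids, cdw0, submission queue head, and the phase/more/do-not-retry bits). Below is a minimal parsing sketch for the command half; it is my own illustration, not SPDK code, it assumes one record per line as in the raw console output, and the sscanf pattern and field names are chosen by me.

/* Sketch (not SPDK code): pull the opcode, queue/command ids, LBA and block
 * count out of one "nvme_io_qpair_print_command" notice, assuming one record
 * per line as emitted on the raw console. */
#include <stdio.h>

int main(void)
{
    const char *line =
        "nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: "
        "READ sqid:1 cid:112 nsid:1 lba:21984 len:8 "
        "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0";

    char opcode[16];
    int sqid, cid, nsid, len;
    unsigned long long lba;

    /* Skip everything up to the "*NOTICE*: " marker, then read the fields
     * in the order SPDK prints them. */
    if (sscanf(line, "%*[^*]*NOTICE*: %15s sqid:%d cid:%d nsid:%d lba:%llu len:%d",
               opcode, &sqid, &cid, &nsid, &lba, &len) == 6) {
        printf("%s on qpair %d (cid %d, nsid %d): %d block(s) starting at LBA %llu\n",
               opcode, sqid, cid, nsid, len, lba);
    }
    return 0;
}
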
00:34:53.379 [2024-05-15 13:52:32.292417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.379 [2024-05-15 13:52:32.292471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.379 [2024-05-15 13:52:32.292509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.379 [2024-05-15 13:52:32.292547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.379 [2024-05-15 13:52:32.292585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.379 [2024-05-15 13:52:32.292645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.379 [2024-05-15 13:52:32.292684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.292956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.292972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.293567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.293593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.293634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.293653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.293675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.293692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.293713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.293729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.293751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.293767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.293789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.293806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.293828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.293844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.293876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.293894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.293916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.293932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.293954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.293970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.293991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.379 [2024-05-15 13:52:32.294007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:53.379 [2024-05-15 13:52:32.294029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.380 [2024-05-15 13:52:32.294045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.380 [2024-05-15 13:52:32.294081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.380 [2024-05-15 13:52:32.294119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.380 [2024-05-15 13:52:32.294156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:53.380 [2024-05-15 13:52:32.294193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.380 [2024-05-15 13:52:32.294230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.294268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.294313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.294359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.294397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.294434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.294471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.294508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.294546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.380 [2024-05-15 13:52:32.294583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.380 [2024-05-15 13:52:32.294635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.380 [2024-05-15 13:52:32.294672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.380 [2024-05-15 13:52:32.294710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.380 [2024-05-15 13:52:32.294747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.380 [2024-05-15 13:52:32.294784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.380 [2024-05-15 13:52:32.294828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.380 [2024-05-15 13:52:32.294867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.294904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.294950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.294972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.294988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0050 p:0 m:0 
dnr:0 00:34:53.380 [2024-05-15 13:52:32.295353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.295963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.295984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.296000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.296021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.296037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.380 [2024-05-15 13:52:32.296058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.380 [2024-05-15 13:52:32.296074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.296095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.381 [2024-05-15 13:52:32.296111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.296132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.381 [2024-05-15 13:52:32.296148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.296170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.381 [2024-05-15 13:52:32.296191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.296212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.381 [2024-05-15 13:52:32.296229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.296259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.381 [2024-05-15 13:52:32.296276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.296297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.381 [2024-05-15 13:52:32.296312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.296334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.381 [2024-05-15 13:52:32.296350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.296381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.381 [2024-05-15 13:52:32.296400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.296422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.381 [2024-05-15 13:52:32.296438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.381 [2024-05-15 13:52:32.297198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:53.381 [2024-05-15 13:52:32.297243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 
lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.297969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.297991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.298007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.298029] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.298044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.298066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.298082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.298103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.298119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.298140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.298156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.298177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.298193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.298214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.298230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.298251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.298267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.298288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.311675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.311753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.311775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.311799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.311815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
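For the completion half, the hex pair printed after the status string appears to be the NVMe status code type and status code: per the NVMe base specification, status code type 0x3 is Path Related Status, and status code 0x02 in that group is Asymmetric Access Inaccessible, which is why every READ and WRITE queued to this path during the ANA test completes with the notice seen above. The lookup below is a small sketch of that mapping under those assumptions; decode_status is an illustrative helper of mine, not an SPDK API.

/* Sketch (not an SPDK API): interpret the "(sct/sc)" pair from the
 * completion notices above. Values taken from the NVMe base spec's
 * Path Related Status group. */
#include <stdio.h>

static const char *decode_status(unsigned sct, unsigned sc)
{
    if (sct == 0x3) { /* Path Related Status */
        switch (sc) {
        case 0x00: return "Internal Path Error";
        case 0x01: return "Asymmetric Access Persistent Loss";
        case 0x02: return "Asymmetric Access Inaccessible";
        case 0x03: return "Asymmetric Access Transition";
        }
    }
    return "other/unlisted status";
}

int main(void)
{
    /* The value logged throughout this run. */
    unsigned sct = 0x03, sc = 0x02;
    printf("(%02x/%02x) -> %s\n", sct, sc, decode_status(sct, sc));
    return 0;
}
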
00:34:53.381 [2024-05-15 13:52:32.311837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.311866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.311894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.311911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.311932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.311948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.311970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.311986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.312008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.312024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.312045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.312061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.312083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.312098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.312120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.312135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.312157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.312173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.312194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.312210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.312232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.381 [2024-05-15 13:52:32.312248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:53.381 [2024-05-15 13:52:32.312269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.312285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.312307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.312322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.312352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.312368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.312411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.312428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.312449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.312465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.312492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.312508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.312530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.312545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.312568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.312591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.312645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.312665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.312686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.312702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.312724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.312740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.312762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.312778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.313541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.313571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.313617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.313638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.313672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.313690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.313711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.313734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.313755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.313770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.313792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.313808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.313829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:53.382 [2024-05-15 13:52:32.313845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.313867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.313882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.313903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.313919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.313941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.313957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.313978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.313993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.314301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.314339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.314376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.314413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.314450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.314487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.314524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.314560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.382 [2024-05-15 13:52:32.314880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.314918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.314940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.314956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:34:53.382 [2024-05-15 13:52:32.314977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.382 [2024-05-15 13:52:32.314992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:53.382 [2024-05-15 13:52:32.315014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.315956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.315985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.316000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.316021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.316044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.316067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.316083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.316105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:34:53.383 [2024-05-15 13:52:32.316120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.316142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.316157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.316179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.316195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.316216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.316232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.316254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.316269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.316290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.316306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.316327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.316342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.316364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.316394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.317314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.317347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.317382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.317405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.317433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:125 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.383 [2024-05-15 13:52:32.317468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.317497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.317518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.317546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.317566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.317594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.317633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.317663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.317684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.317720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.317741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.317769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.317789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.317817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.317837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.317864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.317884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.317912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.317932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.317960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.317979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.318007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.318027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.318055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.318075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.318113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.318134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.318162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.318182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.318210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.318230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:53.383 [2024-05-15 13:52:32.318257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.383 [2024-05-15 13:52:32.318277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.318305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.318326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.318353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.318373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.318409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.318429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:34:53.384 [2024-05-15 13:52:32.318457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.318477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.318505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.318525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.318552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.318573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.318615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.318639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.318667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.318688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.318724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.318745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.318772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.318793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.318820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.318840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.318868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.318888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.318915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.318935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:2 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.318963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.318983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.319030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.319078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.319125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.319173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.319220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.319269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.319325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.319374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.319422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.319479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.384 [2024-05-15 13:52:32.319527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.384 [2024-05-15 13:52:32.319575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.384 [2024-05-15 13:52:32.319638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.384 [2024-05-15 13:52:32.319686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.384 [2024-05-15 13:52:32.319735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.384 [2024-05-15 13:52:32.319782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.319830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.384 [2024-05-15 13:52:32.319884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:53.384 [2024-05-15 13:52:32.319911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:53.384 [2024-05-15 13:52:32.319943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:34:53.384 [2024-05-15 13:52:32.319971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:53.384 [2024-05-15 13:52:32.319992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:34:53.384 [... repeated nvme_qpair.c NOTICE pairs omitted: READ commands (lba 21744-22184, SGL TRANSPORT DATA BLOCK) and WRITE commands (lba 22192-22760, SGL DATA BLOCK OFFSET) on sqid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:34:53.388 [2024-05-15 13:52:32.341442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:53.388 [2024-05-15 13:52:32.341463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006d p:0 
m:0 dnr:0 00:34:53.388 [2024-05-15 13:52:32.341493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.388 [2024-05-15 13:52:32.341521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:53.388 [2024-05-15 13:52:32.341550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.388 [2024-05-15 13:52:32.341590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:53.388 [2024-05-15 13:52:32.341644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.388 [2024-05-15 13:52:32.341667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:53.388 [2024-05-15 13:52:32.341696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.388 [2024-05-15 13:52:32.341717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:53.388 [2024-05-15 13:52:32.341746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.341767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.341795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.341816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.341845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.341866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.341895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.341916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.341945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.341966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.341995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342524] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.342952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.342980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.343001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:53.389 [2024-05-15 13:52:32.343051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.343100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.343150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.343199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.343249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.343298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.343347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.343396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.343445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.343504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.343555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.343618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.389 [2024-05-15 13:52:32.343673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.389 [2024-05-15 13:52:32.343722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.389 [2024-05-15 13:52:32.343772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.389 [2024-05-15 13:52:32.343822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.389 [2024-05-15 13:52:32.343871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.389 [2024-05-15 13:52:32.343921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.389 [2024-05-15 13:52:32.343971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:53.389 [2024-05-15 13:52:32.343999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.344020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.344050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.344071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.344920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.344971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:34:53.390 [2024-05-15 13:52:32.345409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:124 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.345953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.345982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.390 [2024-05-15 13:52:32.346003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.346032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.390 [2024-05-15 13:52:32.346053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.346082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.390 [2024-05-15 13:52:32.346102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.346130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.390 [2024-05-15 13:52:32.346151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:53.390 [2024-05-15 13:52:32.346180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.390 [2024-05-15 13:52:32.346200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.346261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.346310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.346359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.346407] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.391 [2024-05-15 13:52:32.346456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.391 [2024-05-15 13:52:32.346506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.391 [2024-05-15 13:52:32.346554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.391 [2024-05-15 13:52:32.346617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.391 [2024-05-15 13:52:32.346670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.391 [2024-05-15 13:52:32.346719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.391 [2024-05-15 13:52:32.346769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.391 [2024-05-15 13:52:32.346818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.346876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:53.391 [2024-05-15 13:52:32.346926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.346955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.346975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.347977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.347997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.348025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.348046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.348074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.348104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.348134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.391 [2024-05-15 13:52:32.348155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:53.391 [2024-05-15 13:52:32.348184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.348205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.348233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.348255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.348283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.348304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.348333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.348354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.348397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.348420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 
dnr:0 00:34:53.392 [2024-05-15 13:52:32.348449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.348471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.348499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.348520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.348549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.348570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.348598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.348637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.348668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.348690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.349690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.349726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.349778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.349802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.349831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.349852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.349881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.349902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.349931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.349951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.349980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.392 [2024-05-15 13:52:32.350000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.350920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:53.392 [2024-05-15 13:52:32.350970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.350999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.351031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.351060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.351081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.351110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.351130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.351159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.351180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:53.392 [2024-05-15 13:52:32.351209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.392 [2024-05-15 13:52:32.351230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.351279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.351328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.351377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.351426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 
lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.351476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.351525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.351574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.351647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.351702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.351752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.351801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.351856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.351906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.351955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.351984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.352004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.352033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.352053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.352082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.393 [2024-05-15 13:52:32.352102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.352131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.393 [2024-05-15 13:52:32.352151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.352180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.393 [2024-05-15 13:52:32.352201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.352229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.393 [2024-05-15 13:52:32.352250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.352287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.393 [2024-05-15 13:52:32.352309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.352337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.393 [2024-05-15 13:52:32.352358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.352403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.352426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.352465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.352481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:34:53.393 [2024-05-15 13:52:32.353106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.353134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.353160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.353178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.353201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.353217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.353238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.353253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.353274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.353289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.353310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.353325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.353346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.353362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:53.393 [2024-05-15 13:52:32.353383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.393 [2024-05-15 13:52:32.353398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.353444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.353481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.353518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.353554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.353590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.353646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.353682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.353719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.353761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.353798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.353835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.353871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.353922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.353960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.353981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.353997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:53.394 [2024-05-15 13:52:32.354250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.354287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.354323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.354372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.354411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.354447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.354484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.354521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.394 [2024-05-15 13:52:32.354557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 
lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.394 [2024-05-15 13:52:32.354909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:53.394 [2024-05-15 13:52:32.354930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.354946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.354967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.354983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 
dnr:0 00:34:53.395 [2024-05-15 13:52:32.355374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.355869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.355885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.356635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.356662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.356689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.356706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.356728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.356744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.356765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.356781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.356801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.356817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.356838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.356853] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.356874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.395 [2024-05-15 13:52:32.356889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.356911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.395 [2024-05-15 13:52:32.356932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.356954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.395 [2024-05-15 13:52:32.356980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.357003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.395 [2024-05-15 13:52:32.357019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.357040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.395 [2024-05-15 13:52:32.357056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.357077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.395 [2024-05-15 13:52:32.357093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.357114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.395 [2024-05-15 13:52:32.357130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:53.395 [2024-05-15 13:52:32.357152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:27 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.357968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.357983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.358019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.358056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.358092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.358133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.358171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.358207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.358243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.358279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.358315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.358352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:34:53.396 [2024-05-15 13:52:32.358372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.358394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.396 [2024-05-15 13:52:32.358432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.396 [2024-05-15 13:52:32.358468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.396 [2024-05-15 13:52:32.358505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.396 [2024-05-15 13:52:32.358541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.396 [2024-05-15 13:52:32.358577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.396 [2024-05-15 13:52:32.358627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.396 [2024-05-15 13:52:32.358665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:53.396 [2024-05-15 13:52:32.358687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.358702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.359968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.359989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.360004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.360040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:53.397 [2024-05-15 13:52:32.360077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.360113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.360149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.397 [2024-05-15 13:52:32.360186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.397 [2024-05-15 13:52:32.360223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.397 [2024-05-15 13:52:32.360259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.397 [2024-05-15 13:52:32.360295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.397 [2024-05-15 13:52:32.360331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.397 [2024-05-15 13:52:32.360384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.397 [2024-05-15 13:52:32.360424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:21800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.397 [2024-05-15 13:52:32.360465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.360503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.397 [2024-05-15 13:52:32.360549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:53.397 [2024-05-15 13:52:32.360569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.398 [2024-05-15 13:52:32.360585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.360616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.398 [2024-05-15 13:52:32.360634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.360656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.398 [2024-05-15 13:52:32.360671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.360692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.398 [2024-05-15 13:52:32.360708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.360729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.398 [2024-05-15 13:52:32.360744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.360765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.398 [2024-05-15 13:52:32.360780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.360801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.360817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.360847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.360864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.360885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.360901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.360922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.360937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.360958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.360973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.360994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.361031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.361067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.361113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.361150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.361187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 
dnr:0 00:34:53.398 [2024-05-15 13:52:32.361223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.361259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.361295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.361339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.361376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.361412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.361449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.361485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.361521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.361557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.361573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.371430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.371470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.371497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.371514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.371536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:21992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.371553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.371575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.371591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.371628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:22008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.371659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.371683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.371699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.371720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.371736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.371757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.371773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.371793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.371809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.371830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.371845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.371866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.371882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.371903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.371918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:53.398 [2024-05-15 13:52:32.371941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.398 [2024-05-15 13:52:32.371957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.399 [2024-05-15 13:52:32.372326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.399 [2024-05-15 13:52:32.372412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.399 [2024-05-15 13:52:32.372455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.399 [2024-05-15 13:52:32.372498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.399 [2024-05-15 13:52:32.372553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.399 [2024-05-15 13:52:32.372596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:34:53.399 [2024-05-15 13:52:32.372656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.399 [2024-05-15 13:52:32.372697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.372739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.372781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.372822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.372864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.372905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.372946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.372972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.372988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:22392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:22424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:22512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:34:53.399 [2024-05-15 13:52:32.373940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.373983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.373999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:53.399 [2024-05-15 13:52:32.374024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.399 [2024-05-15 13:52:32.374040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:32.374082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:32.374130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:32.374173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:32.374215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:32.374256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:32.374297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:32.374338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:32.374379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:32.374421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.400 [2024-05-15 13:52:32.374462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.400 [2024-05-15 13:52:32.374504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:22160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.400 [2024-05-15 13:52:32.374546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.400 [2024-05-15 13:52:32.374587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.400 [2024-05-15 13:52:32.374644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.400 [2024-05-15 13:52:32.374694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:32.374841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:32.374863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.400 [2024-05-15 13:52:45.666173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.666254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.666294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.666331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.666368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.666405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.666442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.666480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.666752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.666783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 
[2024-05-15 13:52:45.666842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.666871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.666900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.666929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.666958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.666973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.666986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.667002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.667015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.667030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.667043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.667057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.667070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.667085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.667098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.667113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.667126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.667141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.667154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.667169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.667182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.400 [2024-05-15 13:52:45.667204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.400 [2024-05-15 13:52:45.667218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:53.401 [2024-05-15 13:52:45.667747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.667934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.401 [2024-05-15 13:52:45.667971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.667986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.401 [2024-05-15 13:52:45.668000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.401 [2024-05-15 13:52:45.668029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668044] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.401 [2024-05-15 13:52:45.668058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.401 [2024-05-15 13:52:45.668087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.401 [2024-05-15 13:52:45.668115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:63448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.401 [2024-05-15 13:52:45.668143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:63456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.401 [2024-05-15 13:52:45.668172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.401 [2024-05-15 13:52:45.668200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.668229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.668257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.668286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.668320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668336] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.668351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.668379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.668423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.401 [2024-05-15 13:52:45.668438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.401 [2024-05-15 13:52:45.668451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63960 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.668947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:34:53.402 [2024-05-15 13:52:45.668976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.668991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669264] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.402 [2024-05-15 13:52:45.669454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.402 [2024-05-15 13:52:45.669476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.669491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.403 [2024-05-15 13:52:45.669505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.669520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.403 [2024-05-15 13:52:45.669533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.669547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.403 [2024-05-15 13:52:45.669561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.669576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.403 [2024-05-15 13:52:45.669589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.669614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.403 [2024-05-15 13:52:45.669630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.669645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:53.403 [2024-05-15 13:52:45.669659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.669691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.669706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64232 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.669720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.669738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.669748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.669758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64240 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.669771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.669784] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.669794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.669804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64248 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.669817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.669830] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.669839] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.669855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64256 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.669868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.669888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.669899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.669908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64264 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.669921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.669936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.669946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.669956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64272 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.669969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.669982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.669992] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.670002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64280 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.670015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.670028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.670037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.670047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64288 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.670060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.670073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.670083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.670092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64296 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.670106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.670119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.670129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.670139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64304 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.670151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.670165] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.670174] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 
[2024-05-15 13:52:45.670184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64312 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.670197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.670210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.670219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.670233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64320 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.670252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.670266] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.670275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.670286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64328 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.670299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.670313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.670322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.670343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64336 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.670356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.670369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.670379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.670389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64344 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.670402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.670415] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.670425] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.670436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64352 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.670449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.670462] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.670471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.670481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64360 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.670508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.670521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.670531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.681328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64368 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.681368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.681391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.681403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.681420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64376 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.681434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.681447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.681457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.681484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64384 len:8 PRP1 0x0 PRP2 0x0 00:34:53.403 [2024-05-15 13:52:45.681499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.403 [2024-05-15 13:52:45.681513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.403 [2024-05-15 13:52:45.681522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.403 [2024-05-15 13:52:45.681532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64392 len:8 PRP1 0x0 PRP2 0x0 00:34:53.404 [2024-05-15 13:52:45.681545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.404 [2024-05-15 13:52:45.681559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.404 [2024-05-15 13:52:45.681569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.404 [2024-05-15 13:52:45.681579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64400 len:8 PRP1 0x0 PRP2 0x0 00:34:53.404 [2024-05-15 13:52:45.681591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.404 [2024-05-15 13:52:45.681620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:53.404 [2024-05-15 13:52:45.681633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:53.404 [2024-05-15 13:52:45.681643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:64408 len:8 PRP1 0x0 PRP2 0x0 00:34:53.404 [2024-05-15 13:52:45.681656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.404 [2024-05-15 13:52:45.681740] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x12ed840 was disconnected and freed. reset controller. 00:34:53.404 [2024-05-15 13:52:45.681892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:53.404 [2024-05-15 13:52:45.681918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.404 [2024-05-15 13:52:45.681934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:53.404 [2024-05-15 13:52:45.681947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.404 [2024-05-15 13:52:45.681961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:53.404 [2024-05-15 13:52:45.681974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.404 [2024-05-15 13:52:45.681988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:53.404 [2024-05-15 13:52:45.682001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.404 [2024-05-15 13:52:45.682016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:53.404 [2024-05-15 13:52:45.682030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:53.404 [2024-05-15 13:52:45.682050] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fcd10 is same with the state(5) to be set 00:34:53.404 [2024-05-15 13:52:45.683776] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:53.404 [2024-05-15 13:52:45.683820] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fcd10 (9): Bad file descriptor 00:34:53.404 [2024-05-15 13:52:45.683960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.404 [2024-05-15 13:52:45.684021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:53.404 [2024-05-15 13:52:45.684044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12fcd10 with addr=10.0.0.2, port=4421 00:34:53.404 [2024-05-15 13:52:45.684059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fcd10 is same with the state(5) to be set 00:34:53.404 [2024-05-15 13:52:45.684083] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12fcd10 (9): Bad file descriptor 00:34:53.404 [2024-05-15 13:52:45.684106] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:53.404 [2024-05-15 13:52:45.684120] 
nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:53.404 [2024-05-15 13:52:45.684134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:53.404 [2024-05-15 13:52:45.684158] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:53.404 [2024-05-15 13:52:45.684172] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:53.404 [2024-05-15 13:52:55.775994] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:34:53.404 Received shutdown signal, test time was about 55.467422 seconds 00:34:53.404 00:34:53.404 Latency(us) 00:34:53.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:53.404 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:53.404 Verification LBA range: start 0x0 length 0x4000 00:34:53.404 Nvme0n1 : 55.47 7364.54 28.77 0.00 0.00 17349.84 733.56 7107438.78 00:34:53.404 =================================================================================================================== 00:34:53.404 Total : 7364.54 28.77 0.00 0.00 17349.84 733.56 7107438.78 00:34:53.404 13:53:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:53.663 rmmod nvme_tcp 00:34:53.663 rmmod nvme_fabrics 00:34:53.663 rmmod nvme_keyring 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 113998 ']' 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 113998 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 113998 ']' 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 113998 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 113998 00:34:53.663 killing process with pid 113998 00:34:53.663 13:53:06 
nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 113998' 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 113998 00:34:53.663 [2024-05-15 13:53:06.569454] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:34:53.663 13:53:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 113998 00:34:53.922 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:53.922 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:53.922 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:53.922 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:53.922 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:53.922 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.922 13:53:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:53.922 13:53:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.922 13:53:06 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:34:53.922 00:34:53.922 real 1m1.784s 00:34:53.922 user 2m55.702s 00:34:53.922 sys 0m13.450s 00:34:53.922 13:53:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:53.922 13:53:06 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:53.922 ************************************ 00:34:53.922 END TEST nvmf_host_multipath 00:34:53.922 ************************************ 00:34:53.922 13:53:06 nvmf_tcp -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:34:53.922 13:53:06 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:34:53.922 13:53:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:53.922 13:53:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.922 ************************************ 00:34:53.922 START TEST nvmf_timeout 00:34:53.922 ************************************ 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:34:53.922 * Looking for test storage... 
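Before the timeout test rebuilds the target environment, the multipath test above tore its own copy down: the subsystem was deleted over RPC, the host-side NVMe/TCP kernel modules were unloaded, the nvmf_tgt process (pid 113998 in this run) was killed, and the initiator interface was flushed. A minimal standalone sketch of that teardown, assuming the same repo path, subsystem NQN, namespace and interface names as in the trace (the target pid is passed in rather than hard-coded):

  # Sketch of the cleanup performed by nvmftestfini in the trace above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nvmfpid=${1:?usage: $0 <nvmf_tgt pid>}

  "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1    # drop the test subsystem first
  sync                                                       # settle outstanding I/O before unloading modules
  modprobe -v -r nvme-tcp                                    # also pulls out nvme_fabrics/nvme_keyring when unused
  modprobe -v -r nvme-fabrics

  kill "$nvmfpid" 2>/dev/null || true
  while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.2; done   # wait for nvmf_tgt to exit
  ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true       # remove the target namespace, if still present
  ip -4 addr flush nvmf_init_if                              # clear the initiator-side veth address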
00:34:53.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.922 13:53:06 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.923 13:53:06 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.923 13:53:06 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.923 
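The common.sh setup just traced fixes the defaults the rest of the test reuses: listener ports 4420/4421/4422, a freshly generated host NQN plus matching host ID, and NVME_CONNECT/NVME_HOST for kernel-initiator runs. This particular run drives traffic through SPDK's bdevperf (note the bdevperf_rpc_sock set below), so the following is only a hypothetical illustration of how those identifiers would be consumed by nvme-cli against the subsystem created later in this test (nqn.2016-06.io.spdk:cnode1); it is not part of the trace:

  # Hypothetical host-side connect using the identifiers defined above.
  NVME_HOSTNQN=$(nvme gen-hostnqn)               # e.g. nqn.2014-08.org.nvmexpress:uuid:1922f591-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}            # the UUID portion doubles as the host ID
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"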
13:53:06 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.923 13:53:06 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:34:53.923 13:53:06 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.923 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:34:53.923 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:53.923 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:53.923 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:53.923 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.923 13:53:06 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:53.923 13:53:07 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:53.923 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:34:54.181 Cannot find device "nvmf_tgt_br" 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:34:54.181 Cannot find device "nvmf_tgt_br2" 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:34:54.181 Cannot find device "nvmf_tgt_br" 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:34:54.181 Cannot find device "nvmf_tgt_br2" 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:54.181 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:54.181 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:34:54.181 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:54.182 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:54.182 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:54.182 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:34:54.182 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:34:54.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:54.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:34:54.440 00:34:54.440 --- 10.0.0.2 ping statistics --- 00:34:54.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.440 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:34:54.440 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:34:54.440 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:34:54.440 00:34:54.440 --- 10.0.0.3 ping statistics --- 00:34:54.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.440 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:54.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:54.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.041 ms 00:34:54.440 00:34:54.440 --- 10.0.0.1 ping statistics --- 00:34:54.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:54.440 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=115351 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 115351 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 115351 ']' 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:54.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:54.440 13:53:07 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:34:54.440 [2024-05-15 13:53:07.444580] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 
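At this point nvmf_veth_init has finished wiring the virtual network: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, the target's nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) live inside nvmf_tgt_ns_spdk, the peer ends are joined by the nvmf_br bridge, and iptables admits NVMe/TCP traffic on port 4420. A condensed sketch of that layout, using only the commands that appear in the trace above:

  # Condensed sketch of the veth/bridge topology built by nvmf_veth_init.
  ip netns add nvmf_tgt_ns_spdk

  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays in the root ns
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target interface

  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge                               # bridge the root-ns peer ends together
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                      # sanity-check reachability of the target ns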
00:34:54.440 [2024-05-15 13:53:07.444711] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:54.699 [2024-05-15 13:53:07.564958] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:54.699 [2024-05-15 13:53:07.583627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:54.699 [2024-05-15 13:53:07.690047] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:54.699 [2024-05-15 13:53:07.690108] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:54.699 [2024-05-15 13:53:07.690123] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:54.699 [2024-05-15 13:53:07.690134] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:54.699 [2024-05-15 13:53:07.690143] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:54.699 [2024-05-15 13:53:07.690570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:34:54.699 [2024-05-15 13:53:07.690583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.634 13:53:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:55.634 13:53:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:34:55.634 13:53:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:55.634 13:53:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:55.634 13:53:08 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:34:55.634 13:53:08 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:55.634 13:53:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:55.634 13:53:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:55.634 [2024-05-15 13:53:08.718885] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:55.892 13:53:08 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:56.150 Malloc0 00:34:56.150 13:53:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:56.408 13:53:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:56.668 13:53:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:56.941 [2024-05-15 13:53:09.965693] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:34:56.941 [2024-05-15 13:53:09.966079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:56.941 13:53:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=115447 00:34:56.941 13:53:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:34:56.941 13:53:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 115447 /var/tmp/bdevperf.sock 00:34:56.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:56.941 13:53:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 115447 ']' 00:34:56.941 13:53:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:56.941 13:53:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:56.941 13:53:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:56.941 13:53:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:56.941 13:53:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:34:57.199 [2024-05-15 13:53:10.054380] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:34:57.199 [2024-05-15 13:53:10.054514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115447 ] 00:34:57.199 [2024-05-15 13:53:10.182719] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:57.199 [2024-05-15 13:53:10.199124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.456 [2024-05-15 13:53:10.304911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:34:58.023 13:53:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:58.023 13:53:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:34:58.023 13:53:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:34:58.281 13:53:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:34:58.847 NVMe0n1 00:34:58.847 13:53:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=115490 00:34:58.847 13:53:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:58.847 13:53:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:34:58.847 Running I/O for 10 seconds... 
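The run above brings the target up inside the nvmf_tgt_ns_spdk namespace, builds a malloc-backed subsystem, and points bdevperf at it before the timeout scenarios start. Condensed from the commands visible in this trace (rpc.py and bdevperf paths shortened to the SPDK repo root; the 10.0.0.2 listener address comes from the veth setup earlier in the log), the sequence is roughly:

    # target side: TCP transport, one malloc-backed namespace, one listener
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: bdevperf, nvme options (-r -1 passed through as in the trace),
    # then the attach with a 5 s controller-loss window and 2 s reconnect delay
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

Every flag here is lifted from the trace itself; only the backgrounding of bdevperf and the path shortening are editorial.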
00:34:59.782 13:53:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:00.087 [2024-05-15 13:53:12.994421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.087 [2024-05-15 13:53:12.994488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.087 [2024-05-15 13:53:12.994513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.087 [2024-05-15 13:53:12.994524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.087 [2024-05-15 13:53:12.994536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.088 [2024-05-15 13:53:12.994545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.088 [2024-05-15 13:53:12.994566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.088 [2024-05-15 13:53:12.994587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 
[2024-05-15 13:53:12.994716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.994987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.994999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.088 [2024-05-15 13:53:12.995379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.088 [2024-05-15 13:53:12.995390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.089 [2024-05-15 13:53:12.995403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.089 [2024-05-15 13:53:12.995423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.089 [2024-05-15 13:53:12.995443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.089 [2024-05-15 13:53:12.995464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.089 [2024-05-15 13:53:12.995495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.089 [2024-05-15 13:53:12.995516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.089 [2024-05-15 13:53:12.995536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 
[2024-05-15 13:53:12.995567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.995986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.995997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:70 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.996006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.996017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.996025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.996036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.996045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.996056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.996065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.996075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.996084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.996095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.996104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.996115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.996123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.996134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.996144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.996156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.996170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.996181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.996190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.996201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83176 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.996210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.996221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.996230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.089 [2024-05-15 13:53:12.996241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.089 [2024-05-15 13:53:12.996250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.090 [2024-05-15 13:53:12.996270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.090 [2024-05-15 13:53:12.996290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.090 [2024-05-15 13:53:12.996310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.090 [2024-05-15 13:53:12.996330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:00.090 [2024-05-15 13:53:12.996350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 
13:53:12.996429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:82728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.996981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:82776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.996990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.997001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:82784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.997010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.997021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:82792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.997030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.997041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.997050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.997061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.997070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.997081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.997091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.997102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.997111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.997122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.997130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.997142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:82840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.997160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.090 [2024-05-15 13:53:12.997171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.090 [2024-05-15 13:53:12.997180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.091 [2024-05-15 13:53:12.997191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.091 [2024-05-15 13:53:12.997200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.091 [2024-05-15 13:53:12.997211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.091 [2024-05-15 13:53:12.997220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.091 [2024-05-15 13:53:12.997231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:00.091 [2024-05-15 13:53:12.997240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.091 [2024-05-15 13:53:12.997251] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d68c0 is same with the state(5) to be set 00:35:00.091 [2024-05-15 13:53:12.997265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:00.091 [2024-05-15 13:53:12.997273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:00.091 [2024-05-15 13:53:12.997282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82880 len:8 PRP1 0x0 PRP2 0x0 00:35:00.091 [2024-05-15 13:53:12.997291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:00.091 [2024-05-15 13:53:12.997359] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13d68c0 was disconnected and freed. reset controller. 
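The long run of ABORTED - SQ DELETION completions above is consistent with what the test did at the top of this window: it tears the listener down mid-run, the target drops the TCP queue pair, and bdev_nvme aborts every command still queued on it before scheduling a controller reset. The trigger, verbatim from this trace, is a single RPC:

    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With nothing listening on 10.0.0.2:4420 anymore, the reconnect attempts that follow fail with connect() errno 111 (connection refused).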
00:35:00.091 [2024-05-15 13:53:12.997636] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:00.091 [2024-05-15 13:53:12.997728] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13db330 (9): Bad file descriptor 00:35:00.091 [2024-05-15 13:53:12.997838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:00.091 [2024-05-15 13:53:12.997896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:00.091 [2024-05-15 13:53:12.997914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13db330 with addr=10.0.0.2, port=4420 00:35:00.091 [2024-05-15 13:53:12.997925] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13db330 is same with the state(5) to be set 00:35:00.091 [2024-05-15 13:53:12.997943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13db330 (9): Bad file descriptor 00:35:00.091 [2024-05-15 13:53:12.997959] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:00.091 [2024-05-15 13:53:12.997968] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:00.091 [2024-05-15 13:53:12.997979] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:00.091 [2024-05-15 13:53:12.997999] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:00.091 [2024-05-15 13:53:12.998011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:00.091 13:53:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:35:02.012 [2024-05-15 13:53:14.998241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.012 [2024-05-15 13:53:14.998352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:02.012 [2024-05-15 13:53:14.998374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13db330 with addr=10.0.0.2, port=4420 00:35:02.012 [2024-05-15 13:53:14.998389] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13db330 is same with the state(5) to be set 00:35:02.012 [2024-05-15 13:53:14.998419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13db330 (9): Bad file descriptor 00:35:02.012 [2024-05-15 13:53:14.998453] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:02.012 [2024-05-15 13:53:14.998465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:02.012 [2024-05-15 13:53:14.998480] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:02.012 [2024-05-15 13:53:14.998511] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:35:02.012 [2024-05-15 13:53:14.998524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:02.012 13:53:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:35:02.012 13:53:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:02.012 13:53:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:35:02.270 13:53:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:35:02.270 13:53:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:35:02.270 13:53:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:35:02.270 13:53:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:35:02.837 13:53:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:35:02.837 13:53:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:35:04.259 [2024-05-15 13:53:16.998721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.259 [2024-05-15 13:53:16.998833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:04.259 [2024-05-15 13:53:16.998854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13db330 with addr=10.0.0.2, port=4420 00:35:04.259 [2024-05-15 13:53:16.998870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13db330 is same with the state(5) to be set 00:35:04.259 [2024-05-15 13:53:16.998899] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13db330 (9): Bad file descriptor 00:35:04.259 [2024-05-15 13:53:16.998919] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:04.259 [2024-05-15 13:53:16.998929] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:04.259 [2024-05-15 13:53:16.998952] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:04.259 [2024-05-15 13:53:16.998981] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:04.259 [2024-05-15 13:53:16.998994] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:06.255 [2024-05-15 13:53:18.999073] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
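One way to read the timestamps above, assuming the attach flags behave as their names suggest: the controller was attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, so after the disconnect the driver retries roughly every 2 s and gives up once the 5 s loss window has elapsed, at which point the queued I/O is failed back to bdevperf:

    t ~ 0 s   13:53:12  listener removed, qpair freed, reset attempt 1 -> connect() refused
    t ~ 2 s   13:53:14  reset attempt 2 -> connect() refused      (reconnect delay = 2 s)
    t ~ 4 s   13:53:16  reset attempt 3 -> connect() refused
    t ~ 6 s   13:53:18  5 s controller-loss window exceeded, reset abandoned for good

That schedule lines up with the ~8.16 s runtime and the non-zero Fail/s column in the summary that follows.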
00:35:07.190 00:35:07.190 Latency(us) 00:35:07.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.190 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:07.190 Verification LBA range: start 0x0 length 0x4000 00:35:07.190 NVMe0n1 : 8.16 1259.60 4.92 15.69 0.00 100213.46 2308.65 7015926.69 00:35:07.190 =================================================================================================================== 00:35:07.190 Total : 1259.60 4.92 15.69 0.00 100213.46 2308.65 7015926.69 00:35:07.190 0 00:35:07.754 13:53:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:35:07.754 13:53:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:35:07.754 13:53:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:35:08.012 13:53:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:35:08.012 13:53:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:35:08.012 13:53:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:35:08.012 13:53:20 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:35:08.270 13:53:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:35:08.270 13:53:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 115490 00:35:08.270 13:53:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 115447 00:35:08.270 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 115447 ']' 00:35:08.270 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 115447 00:35:08.270 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:35:08.270 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:08.270 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 115447 00:35:08.270 killing process with pid 115447 00:35:08.270 Received shutdown signal, test time was about 9.411286 seconds 00:35:08.270 00:35:08.270 Latency(us) 00:35:08.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:08.270 =================================================================================================================== 00:35:08.270 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:08.270 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:35:08.270 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:35:08.270 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 115447' 00:35:08.270 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 115447 00:35:08.270 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 115447 00:35:08.528 13:53:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:08.786 [2024-05-15 13:53:21.682567] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
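At this point the first bdevperf instance (pid 115447) has been killed and the target listener restored via nvmf_subsystem_add_listener (the tcp.c:967 notice above), ready for the next run. Target availability in this test is driven by the listener RPCs on the main nvmf target; as a sketch, the pair of calls that appear in this log are:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # host reconnects start being refused
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420      # host can connect again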
00:35:08.786 13:53:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:35:08.786 13:53:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=115644 00:35:08.786 13:53:21 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 115644 /var/tmp/bdevperf.sock 00:35:08.786 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 115644 ']' 00:35:08.786 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:08.786 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:08.786 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:08.786 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:08.786 13:53:21 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:35:08.786 [2024-05-15 13:53:21.748367] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:35:08.786 [2024-05-15 13:53:21.748469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115644 ] 00:35:08.786 [2024-05-15 13:53:21.871402] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:08.786 [2024-05-15 13:53:21.883173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:09.044 [2024-05-15 13:53:21.982555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:09.978 13:53:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:09.978 13:53:22 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:35:09.978 13:53:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:35:09.978 13:53:22 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:35:10.285 NVMe0n1 00:35:10.285 13:53:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=115692 00:35:10.285 13:53:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:10.285 13:53:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:35:10.545 Running I/O for 10 seconds... 
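For this run, NVMe0 is attached with an explicit reconnect policy; the three long options on the bdev_nvme_attach_controller line govern how the failures that follow are handled (the descriptions below summarize their intended roles and are not taken from this log): --reconnect-delay-sec 1 retries the connection about once per second, --fast-io-fail-timeout-sec 2 starts failing queued I/O back to bdevperf after 2 seconds without a connection, and --ctrlr-loss-timeout-sec 5 gives up on the controller after 5 seconds. The attach call, reformatted for readability:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1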
00:35:11.480 13:53:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:11.740 [2024-05-15 13:53:24.644974] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dd660 is same with the state(5) to be set 00:35:11.740 [2024-05-15 13:53:24.645038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dd660 is same with the state(5) to be set 00:35:11.740 [2024-05-15 13:53:24.645050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dd660 is same with the state(5) to be set 00:35:11.740 [2024-05-15 13:53:24.645306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.740 [2024-05-15 13:53:24.645337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.740 [2024-05-15 13:53:24.645360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.740 [2024-05-15 13:53:24.645372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.740 [2024-05-15 13:53:24.645385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.740 [2024-05-15 13:53:24.645395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.740 [2024-05-15 13:53:24.645407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.740 [2024-05-15 13:53:24.645416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.740 [2024-05-15 13:53:24.645427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:82576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.740 [2024-05-15 13:53:24.645437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.740 [2024-05-15 13:53:24.645449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.740 [2024-05-15 13:53:24.645458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.740 [2024-05-15 13:53:24.645469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.740 [2024-05-15 13:53:24.645478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.740 [2024-05-15 13:53:24.645490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.740 [2024-05-15 13:53:24.645499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.740 [2024-05-15 13:53:24.645510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.740 [2024-05-15 13:53:24.645519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.740 [2024-05-15 13:53:24.645530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.740 [2024-05-15 13:53:24.645539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.740 [2024-05-15 13:53:24.645551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.740 [2024-05-15 13:53:24.645560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.740 [2024-05-15 13:53:24.645572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:57 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81952 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.645989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.645998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:11.741 [2024-05-15 13:53:24.646164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:82056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:82088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646371] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.741 [2024-05-15 13:53:24.646422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.741 [2024-05-15 13:53:24.646431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:82144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:82160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:82192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:82216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:82248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:82296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.646982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.646994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.647015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.647036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.647056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.647076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.647096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.647117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.647138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.647159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.647181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.647202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.647222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:11.742 [2024-05-15 13:53:24.647242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.647263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.742 [2024-05-15 13:53:24.647283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.742 [2024-05-15 13:53:24.647292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.743 [2024-05-15 13:53:24.647323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.743 [2024-05-15 13:53:24.647346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.743 [2024-05-15 13:53:24.647367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.743 [2024-05-15 13:53:24.647388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.743 [2024-05-15 13:53:24.647409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.743 [2024-05-15 13:53:24.647430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647461] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647688] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82760 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.647907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.743 [2024-05-15 13:53:24.647928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.743 [2024-05-15 13:53:24.647949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.743 [2024-05-15 13:53:24.647970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.647983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:11.743 [2024-05-15 13:53:24.647993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.648009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.648019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.648030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.648040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.648051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:11.743 [2024-05-15 13:53:24.648060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.648070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14507a0 is same with the state(5) to be set 00:35:11.743 [2024-05-15 13:53:24.648083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:11.743 [2024-05-15 13:53:24.648091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:11.743 [2024-05-15 13:53:24.648101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82792 len:8 PRP1 0x0 PRP2 0x0 00:35:11.743 [2024-05-15 13:53:24.648110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:11.743 [2024-05-15 13:53:24.648168] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14507a0 was disconnected 
and freed. reset controller. 00:35:11.743 [2024-05-15 13:53:24.648451] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:11.743 [2024-05-15 13:53:24.648546] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1455330 (9): Bad file descriptor 00:35:11.743 [2024-05-15 13:53:24.648677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.743 [2024-05-15 13:53:24.648746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:11.744 [2024-05-15 13:53:24.648764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1455330 with addr=10.0.0.2, port=4420 00:35:11.744 [2024-05-15 13:53:24.648776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1455330 is same with the state(5) to be set 00:35:11.744 [2024-05-15 13:53:24.648795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1455330 (9): Bad file descriptor 00:35:11.744 [2024-05-15 13:53:24.648811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:11.744 [2024-05-15 13:53:24.648822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:11.744 [2024-05-15 13:53:24.648833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:11.744 [2024-05-15 13:53:24.648852] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:11.744 [2024-05-15 13:53:24.648864] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:11.744 13:53:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:35:12.676 [2024-05-15 13:53:25.649022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.676 [2024-05-15 13:53:25.649132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:12.676 [2024-05-15 13:53:25.649156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1455330 with addr=10.0.0.2, port=4420 00:35:12.676 [2024-05-15 13:53:25.649170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1455330 is same with the state(5) to be set 00:35:12.676 [2024-05-15 13:53:25.649198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1455330 (9): Bad file descriptor 00:35:12.676 [2024-05-15 13:53:25.649218] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:12.676 [2024-05-15 13:53:25.649228] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:12.676 [2024-05-15 13:53:25.649239] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:12.676 [2024-05-15 13:53:25.649267] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
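The long run of nvme_qpair print_command/print_completion messages above is the host printing every queued command that completed with ABORTED - SQ DELETION (status 00/08, i.e. generic status "Command Aborted due to SQ Deletion") once the listener was removed mid-run and qpair 0x14507a0 was disconnected and freed; after that the controller falls back into the refused-reconnect loop until the listener is re-added below. When reviewing a log like this it is usually enough to count the aborts rather than read them (a convenience one-liner, assuming the log has been saved to a file, here named build.log):

  grep -o 'ABORTED - SQ DELETION' build.log | wc -l   # number of aborted queued commands reported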
00:35:12.676 [2024-05-15 13:53:25.649278] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:12.676 13:53:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:12.934 [2024-05-15 13:53:25.953256] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:12.934 13:53:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 115692 00:35:13.872 [2024-05-15 13:53:26.660193] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:35:20.429 00:35:20.429 Latency(us) 00:35:20.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:20.429 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:20.429 Verification LBA range: start 0x0 length 0x4000 00:35:20.429 NVMe0n1 : 10.01 6304.11 24.63 0.00 0.00 20258.21 2010.76 3019898.88 00:35:20.429 =================================================================================================================== 00:35:20.429 Total : 6304.11 24.63 0.00 0.00 20258.21 2010.76 3019898.88 00:35:20.429 0 00:35:20.429 13:53:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=115808 00:35:20.429 13:53:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 00:35:20.429 13:53:33 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:20.686 Running I/O for 10 seconds... 00:35:21.619 13:53:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:21.881 [2024-05-15 13:53:34.750921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.881 [2024-05-15 13:53:34.750988] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.881 [2024-05-15 13:53:34.751001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.881 [2024-05-15 13:53:34.751010] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.881 [2024-05-15 13:53:34.751019] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.881 [2024-05-15 13:53:34.751028] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.881 [2024-05-15 13:53:34.751036] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.881 [2024-05-15 13:53:34.751045] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.881 [2024-05-15 13:53:34.751053] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.881 [2024-05-15 13:53:34.751061] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.881 [2024-05-15 13:53:34.751070] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.881 (same nvmf_tcp_qpair_set_recv_state message repeated at every poll from 13:53:34.751078 through 13:53:34.751413, differing only in the microsecond timestamp) 00:35:21.881 [2024-05-15 13:53:34.751421] tcp.c:1598:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.881 [2024-05-15 13:53:34.751429] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.882 [2024-05-15 13:53:34.751437] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.882 [2024-05-15 13:53:34.751446] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.882 [2024-05-15 13:53:34.751454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.882 [2024-05-15 13:53:34.751462] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.882 [2024-05-15 13:53:34.751470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.882 [2024-05-15 13:53:34.751478] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.882 [2024-05-15 13:53:34.751486] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13dbf10 is same with the state(5) to be set 00:35:21.882 [2024-05-15 13:53:34.752048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:81080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:30 nsid:1 lba:81152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81232 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.882 [2024-05-15 13:53:34.752803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.882 [2024-05-15 13:53:34.752814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.752824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.752835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.752844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.752855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:21.883 [2024-05-15 13:53:34.752864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.752875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.752884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.752895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.752904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.752915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.752924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.752934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.752943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.752954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.752963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.752974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.752983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.752994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.753003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.753022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.753043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.753064] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.753084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.753105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.753125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.753145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.753164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.753184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.753203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.753223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.883 [2024-05-15 13:53:34.753243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753264] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.883 [2024-05-15 13:53:34.753637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.883 [2024-05-15 13:53:34.753646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 
[2024-05-15 13:53:34.753703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.753984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.753993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.754012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.754037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.754057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.754077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.754098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.754118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.754138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.754158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.754178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.754198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.754218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.754238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:21.884 [2024-05-15 13:53:34.754258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.884 [2024-05-15 13:53:34.754277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.884 [2024-05-15 13:53:34.754297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81488 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:21.884 [2024-05-15 13:53:34.754318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.884 [2024-05-15 13:53:34.754337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.884 [2024-05-15 13:53:34.754362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.884 [2024-05-15 13:53:34.754382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.884 [2024-05-15 13:53:34.754412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.884 [2024-05-15 13:53:34.754438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.884 [2024-05-15 13:53:34.754449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.885 [2024-05-15 13:53:34.754457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.885 [2024-05-15 13:53:34.754478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.885 [2024-05-15 13:53:34.754497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.885 [2024-05-15 13:53:34.754517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.885 
[2024-05-15 13:53:34.754539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.885 [2024-05-15 13:53:34.754565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.885 [2024-05-15 13:53:34.754588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.885 [2024-05-15 13:53:34.754622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.885 [2024-05-15 13:53:34.754643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.885 [2024-05-15 13:53:34.754669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.885 [2024-05-15 13:53:34.754690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.885 [2024-05-15 13:53:34.754710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.885 [2024-05-15 13:53:34.754735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:21.885 [2024-05-15 13:53:34.754755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:21.885 [2024-05-15 13:53:34.754792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:35:21.885 [2024-05-15 13:53:34.754801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81648 len:8 PRP1 0x0 PRP2 0x0 00:35:21.885 [2024-05-15 13:53:34.754810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:21.885 [2024-05-15 13:53:34.754866] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1482980 was disconnected and freed. reset controller. 00:35:21.885 [2024-05-15 13:53:34.755094] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:21.885 [2024-05-15 13:53:34.755182] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1455330 (9): Bad file descriptor 00:35:21.885 [2024-05-15 13:53:34.755288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:21.885 [2024-05-15 13:53:34.755336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:21.885 [2024-05-15 13:53:34.755352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1455330 with addr=10.0.0.2, port=4420 00:35:21.885 [2024-05-15 13:53:34.755363] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1455330 is same with the state(5) to be set 00:35:21.885 [2024-05-15 13:53:34.755381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1455330 (9): Bad file descriptor 00:35:21.885 [2024-05-15 13:53:34.755399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:21.885 [2024-05-15 13:53:34.755409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:21.885 [2024-05-15 13:53:34.755420] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:21.885 [2024-05-15 13:53:34.755439] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:21.885 [2024-05-15 13:53:34.755452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:21.885 13:53:34 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:35:22.818 [2024-05-15 13:53:35.755632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.818 [2024-05-15 13:53:35.755749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:22.818 [2024-05-15 13:53:35.755770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1455330 with addr=10.0.0.2, port=4420 00:35:22.818 [2024-05-15 13:53:35.755784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1455330 is same with the state(5) to be set 00:35:22.818 [2024-05-15 13:53:35.755812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1455330 (9): Bad file descriptor 00:35:22.818 [2024-05-15 13:53:35.755833] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:22.818 [2024-05-15 13:53:35.755843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:22.818 [2024-05-15 13:53:35.755861] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:35:22.818 [2024-05-15 13:53:35.755889] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:22.818 [2024-05-15 13:53:35.755902] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:23.752 [2024-05-15 13:53:36.756068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.752 [2024-05-15 13:53:36.756200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:23.752 [2024-05-15 13:53:36.756223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1455330 with addr=10.0.0.2, port=4420 00:35:23.752 [2024-05-15 13:53:36.756239] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1455330 is same with the state(5) to be set 00:35:23.752 [2024-05-15 13:53:36.756269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1455330 (9): Bad file descriptor 00:35:23.752 [2024-05-15 13:53:36.756289] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:23.752 [2024-05-15 13:53:36.756299] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:23.752 [2024-05-15 13:53:36.756310] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:23.752 [2024-05-15 13:53:36.756344] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:23.752 [2024-05-15 13:53:36.756358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.686 [2024-05-15 13:53:37.759974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.686 [2024-05-15 13:53:37.760085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:24.686 [2024-05-15 13:53:37.760106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1455330 with addr=10.0.0.2, port=4420 00:35:24.686 [2024-05-15 13:53:37.760120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1455330 is same with the state(5) to be set 00:35:24.686 [2024-05-15 13:53:37.760383] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1455330 (9): Bad file descriptor 00:35:24.686 [2024-05-15 13:53:37.760666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:24.686 [2024-05-15 13:53:37.760682] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:24.686 [2024-05-15 13:53:37.760694] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:24.686 [2024-05-15 13:53:37.764596] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
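Each attempt above fails the same way: connect() to 10.0.0.2:4420 returns errno 111 (ECONNREFUSED on Linux) because the target listener is still gone, so the reset is retried on the next pass until the listener is re-added just below. For ad-hoc debugging, a plain bash probe of the port shows when the target side is back; this is a hypothetical helper, not part of timeout.sh:

    # Hypothetical debug helper: wait until something accepts connections on the
    # NVMe/TCP port again, using bash's /dev/tcp redirection. While this loop spins,
    # the initiator's reconnect attempts keep failing with errno 111.
    until (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; do
        sleep 1
    done
    echo "10.0.0.2:4420 is accepting connections; the next controller reset should succeed"
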
00:35:24.686 [2024-05-15 13:53:37.764641] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:24.686 13:53:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:24.943 [2024-05-15 13:53:37.996471] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:24.943 13:53:38 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 115808 00:35:25.878 [2024-05-15 13:53:38.798264] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:35:31.145 00:35:31.145 Latency(us) 00:35:31.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.145 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:35:31.145 Verification LBA range: start 0x0 length 0x4000 00:35:31.145 NVMe0n1 : 10.01 5271.13 20.59 3545.98 0.00 14486.20 640.47 3019898.88 00:35:31.145 =================================================================================================================== 00:35:31.145 Total : 5271.13 20.59 3545.98 0.00 14486.20 0.00 3019898.88 00:35:31.145 0 00:35:31.145 13:53:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 115644 00:35:31.145 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 115644 ']' 00:35:31.145 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 115644 00:35:31.145 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:35:31.145 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:31.146 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 115644 00:35:31.146 killing process with pid 115644 00:35:31.146 Received shutdown signal, test time was about 10.000000 seconds 00:35:31.146 00:35:31.146 Latency(us) 00:35:31.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:31.146 =================================================================================================================== 00:35:31.146 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:31.146 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:35:31.146 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:35:31.146 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 115644' 00:35:31.146 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 115644 00:35:31.146 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 115644 00:35:31.146 13:53:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:35:31.146 13:53:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=115925 00:35:31.146 13:53:43 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 115925 /var/tmp/bdevperf.sock 00:35:31.146 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 115925 ']' 00:35:31.146 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:31.146 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:31.146 
13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:31.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:35:31.146 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:31.146 13:53:43 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:35:31.146 [2024-05-15 13:53:43.955107] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:35:31.146 [2024-05-15 13:53:43.955239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115925 ] 00:35:31.146 [2024-05-15 13:53:44.079377] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:31.146 [2024-05-15 13:53:44.101200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:31.146 [2024-05-15 13:53:44.210842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:35:32.080 13:53:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:32.080 13:53:45 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:35:32.080 13:53:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=115953 00:35:32.080 13:53:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:35:32.080 13:53:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 115925 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:35:32.338 13:53:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:35:32.905 NVMe0n1 00:35:32.905 13:53:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=116007 00:35:32.905 13:53:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:35:32.905 13:53:45 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:35:32.905 Running I/O for 10 seconds... 
00:35:33.842 13:53:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:34.104 [2024-05-15 13:53:47.105927] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.105984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.105996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106013] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106030] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106056] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106096] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106104] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106112] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106121] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106132] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106140] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106148] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106170] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106178] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106226] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106236] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106245] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106254] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106262] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106270] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106278] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106286] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106294] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106302] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106310] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106318] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106327] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the 
state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106351] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106358] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106367] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106375] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106383] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106390] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106407] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106414] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106422] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106430] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106438] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106446] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106461] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106485] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.104 [2024-05-15 13:53:47.106493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106526] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106568] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106576] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106593] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106615] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106633] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106641] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.106650] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e0790 is same with the state(5) to be set 00:35:34.105 [2024-05-15 13:53:47.107089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:73688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:37976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:118352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:26336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:67192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:84552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:34.105 [2024-05-15 13:53:47.107457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 
13:53:47.107682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:57480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:129032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:54464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:112168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.105 [2024-05-15 13:53:47.107842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.105 [2024-05-15 13:53:47.107851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.107862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.107880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.107892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.107900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.107911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.107921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.107931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.107940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.107952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.107968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.107978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.107987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.107997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:48864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115104 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:34.106 [2024-05-15 13:53:47.108554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:42344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.106 [2024-05-15 13:53:47.108739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.106 [2024-05-15 13:53:47.108750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.108759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.108770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.108780] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.108797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.108806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.108817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.108826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.108837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.108846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.108857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.108866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.108877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.108887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.108897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.108907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.108918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:125472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.108935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.108954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.108970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.108988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:115520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:57104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109296] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:33768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:29184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:26736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.107 [2024-05-15 13:53:47.109712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:36008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.107 [2024-05-15 13:53:47.109726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 13:53:47.109738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.108 [2024-05-15 13:53:47.109747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:34.108 [2024-05-15 13:53:47.109758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.108 [2024-05-15 13:53:47.109767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 13:53:47.109778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.108 [2024-05-15 13:53:47.109787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 13:53:47.109798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:122872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.108 [2024-05-15 13:53:47.109807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 13:53:47.109817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.108 [2024-05-15 13:53:47.109827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 13:53:47.109838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.108 [2024-05-15 13:53:47.109847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 13:53:47.109858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.108 [2024-05-15 13:53:47.109867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 13:53:47.109877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.108 [2024-05-15 13:53:47.109886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 13:53:47.109897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.108 [2024-05-15 13:53:47.109907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 13:53:47.109923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.108 [2024-05-15 13:53:47.109932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 13:53:47.109943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.108 [2024-05-15 13:53:47.109952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 
13:53:47.109962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.108 [2024-05-15 13:53:47.109971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 13:53:47.109982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.108 [2024-05-15 13:53:47.109992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 13:53:47.110003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:34.108 [2024-05-15 13:53:47.110012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 13:53:47.110039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:34.108 [2024-05-15 13:53:47.110054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:34.108 [2024-05-15 13:53:47.110064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12040 len:8 PRP1 0x0 PRP2 0x0 00:35:34.108 [2024-05-15 13:53:47.110079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:34.108 [2024-05-15 13:53:47.110134] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22d48c0 was disconnected and freed. reset controller. 00:35:34.108 [2024-05-15 13:53:47.110424] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.108 [2024-05-15 13:53:47.110518] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9330 (9): Bad file descriptor 00:35:34.108 [2024-05-15 13:53:47.110657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.108 [2024-05-15 13:53:47.110715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:34.108 [2024-05-15 13:53:47.110732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9330 with addr=10.0.0.2, port=4420 00:35:34.108 [2024-05-15 13:53:47.110743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9330 is same with the state(5) to be set 00:35:34.108 [2024-05-15 13:53:47.110762] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9330 (9): Bad file descriptor 00:35:34.108 [2024-05-15 13:53:47.110778] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:34.108 [2024-05-15 13:53:47.110787] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:34.108 [2024-05-15 13:53:47.110797] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:34.108 [2024-05-15 13:53:47.110818] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
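Note on the errors above: errno = 111 is ECONNREFUSED on Linux, meaning nothing is accepting TCP connections on 10.0.0.2:4420 any more, so every reconnect attempt bdev_nvme makes is refused immediately and the controller is left in the failed state until the next retry. A host-side way to confirm the listener state, independent of SPDK (an illustrative sketch only, not part of timeout.sh), is a plain TCP probe:

    # Probe the NVMe/TCP listen address; a non-zero exit status corresponds to
    # the "connect() failed, errno = 111" lines in the log above.
    if timeout 1 bash -c '</dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener on 10.0.0.2:4420 is accepting connections"
    else
        echo "connection refused or timed out (ECONNREFUSED = 111)"
    fi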
00:35:34.108 [2024-05-15 13:53:47.110828] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:34.108 13:53:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 116007 00:35:36.642 [2024-05-15 13:53:49.111037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.642 [2024-05-15 13:53:49.111139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:36.642 [2024-05-15 13:53:49.111159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9330 with addr=10.0.0.2, port=4420 00:35:36.642 [2024-05-15 13:53:49.111173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9330 is same with the state(5) to be set 00:35:36.642 [2024-05-15 13:53:49.111202] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9330 (9): Bad file descriptor 00:35:36.642 [2024-05-15 13:53:49.111237] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:36.642 [2024-05-15 13:53:49.111249] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:36.642 [2024-05-15 13:53:49.111260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:36.642 [2024-05-15 13:53:49.111289] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:36.642 [2024-05-15 13:53:49.111302] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:38.014 [2024-05-15 13:53:51.111583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.014 [2024-05-15 13:53:51.111724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:35:38.014 [2024-05-15 13:53:51.111746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22d9330 with addr=10.0.0.2, port=4420 00:35:38.014 [2024-05-15 13:53:51.111760] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22d9330 is same with the state(5) to be set 00:35:38.014 [2024-05-15 13:53:51.111794] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d9330 (9): Bad file descriptor 00:35:38.014 [2024-05-15 13:53:51.111815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:35:38.014 [2024-05-15 13:53:51.111825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:35:38.014 [2024-05-15 13:53:51.111845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:35:38.014 [2024-05-15 13:53:51.111874] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:35:38.014 [2024-05-15 13:53:51.111886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:35:40.578 [2024-05-15 13:53:53.111973] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
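The three reset cycles above (13:53:49, 13:53:51, 13:53:53) repeat the same disconnect, refused connect, controller-failed sequence roughly two seconds apart. With this output saved to a file, the retry cadence and the number of exhausted cycles can be checked with grep (the filename nvmf-timeout.log is only a placeholder):

    # Wall-clock time of every disconnect/reset attempt, to confirm the ~2 s spacing:
    grep -o '\[2024-05-15 [0-9:.]*\] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect' nvmf-timeout.log
    # Number of reset cycles that ended in failure:
    grep -c 'Resetting controller failed.' nvmf-timeout.log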
00:35:41.144 00:35:41.144 Latency(us) 00:35:41.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.144 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:35:41.144 NVMe0n1 : 8.16 2524.17 9.86 15.68 0.00 50344.35 2532.07 7015926.69 00:35:41.144 =================================================================================================================== 00:35:41.144 Total : 2524.17 9.86 15.68 0.00 50344.35 2532.07 7015926.69 00:35:41.144 0 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:41.144 Attaching 5 probes... 00:35:41.144 1492.611361: reset bdev controller NVMe0 00:35:41.144 1492.767664: reconnect bdev controller NVMe0 00:35:41.144 3493.101269: reconnect delay bdev controller NVMe0 00:35:41.144 3493.124765: reconnect bdev controller NVMe0 00:35:41.144 5493.600938: reconnect delay bdev controller NVMe0 00:35:41.144 5493.646934: reconnect bdev controller NVMe0 00:35:41.144 7494.124771: reconnect delay bdev controller NVMe0 00:35:41.144 7494.149714: reconnect bdev controller NVMe0 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 115953 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 115925 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 115925 ']' 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 115925 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 115925 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:35:41.144 killing process with pid 115925 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 115925' 00:35:41.144 Received shutdown signal, test time was about 8.216060 seconds 00:35:41.144 00:35:41.144 Latency(us) 00:35:41.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.144 =================================================================================================================== 00:35:41.144 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 115925 00:35:41.144 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 115925 00:35:41.401 13:53:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 
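The pass/fail decision for this case is the grep shown above: the probe trace captured in trace.txt ("Attaching 5 probes..." above) recorded three 'reconnect delay bdev controller NVMe0' events, so the (( 3 <= 2 )) guard evaluates false and the script moves on to tearing the target down (nvmftestfini, continued below) instead of flagging an error. In isolation the check amounts to the following sketch (trace path as printed above):

    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    if (( delays <= 2 )); then
        echo "expected more than 2 reconnect-delay events, got $delays" >&2
        exit 1
    fi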
00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:41.660 rmmod nvme_tcp 00:35:41.660 rmmod nvme_fabrics 00:35:41.660 rmmod nvme_keyring 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 115351 ']' 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 115351 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 115351 ']' 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 115351 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 115351 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:41.660 killing process with pid 115351 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 115351' 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 115351 00:35:41.660 [2024-05-15 13:53:54.703104] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:41.660 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 115351 00:35:41.919 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:41.919 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:41.919 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:41.919 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:41.919 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:41.919 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:41.919 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:41.919 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:41.919 13:53:54 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:35:41.919 00:35:41.919 real 0m48.069s 00:35:41.919 user 2m22.193s 00:35:41.919 sys 0m5.205s 00:35:41.919 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:41.919 ************************************ 00:35:41.919 13:53:54 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:35:41.919 END TEST nvmf_timeout 00:35:41.919 
************************************ 00:35:42.177 13:53:55 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:35:42.177 13:53:55 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:35:42.177 13:53:55 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:42.177 13:53:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:42.177 13:53:55 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:35:42.177 00:35:42.177 real 21m52.719s 00:35:42.177 user 65m39.880s 00:35:42.177 sys 4m31.094s 00:35:42.177 13:53:55 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:42.177 ************************************ 00:35:42.177 END TEST nvmf_tcp 00:35:42.177 13:53:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:42.177 ************************************ 00:35:42.177 13:53:55 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:35:42.177 13:53:55 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:42.177 13:53:55 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:42.177 13:53:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:42.177 13:53:55 -- common/autotest_common.sh@10 -- # set +x 00:35:42.177 ************************************ 00:35:42.177 START TEST spdkcli_nvmf_tcp 00:35:42.177 ************************************ 00:35:42.177 13:53:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:42.177 * Looking for test storage... 00:35:42.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:35:42.177 13:53:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:42.178 13:53:55 
spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # 
SPDKCLI_BRANCH=/nvmf 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=116220 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 116220 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 116220 ']' 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:42.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:42.178 13:53:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:42.436 [2024-05-15 13:53:55.278915] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:35:42.436 [2024-05-15 13:53:55.278999] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116220 ] 00:35:42.436 [2024-05-15 13:53:55.396987] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:35:42.436 [2024-05-15 13:53:55.411558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:42.436 [2024-05-15 13:53:55.492693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:35:42.436 [2024-05-15 13:53:55.492701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:43.372 13:53:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:43.372 13:53:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:35:43.372 13:53:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:43.372 13:53:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:43.372 13:53:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:43.372 13:53:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:43.372 13:53:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:43.372 13:53:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:43.373 13:53:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:43.373 13:53:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:43.373 13:53:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:43.373 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:43.373 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:43.373 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:43.373 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:43.373 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:43.373 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:43.373 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:43.373 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:43.373 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 
00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:43.373 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:43.373 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:43.373 ' 00:35:46.708 [2024-05-15 13:53:59.040110] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:47.272 [2024-05-15 13:54:00.316890] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:35:47.272 [2024-05-15 13:54:00.317231] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:49.798 [2024-05-15 13:54:02.718776] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:51.710 [2024-05-15 13:54:04.784249] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:53.607 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:53.607 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:53.607 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:53.607 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:53.607 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:53.607 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:53.607 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:53.607 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:53.607 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:53.607 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:53.607 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:53.607 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:53.607 13:54:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:53.607 13:54:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:53.607 13:54:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:53.607 13:54:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:53.607 13:54:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:53.607 13:54:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:53.607 13:54:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:53.607 13:54:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:35:53.866 13:54:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:53.866 13:54:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:53.866 13:54:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:53.866 13:54:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:53.866 13:54:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:54.124 13:54:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:54.124 13:54:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:35:54.124 13:54:06 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:35:54.124 13:54:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:54.124 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:54.124 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:54.124 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:54.124 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:54.124 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:54.124 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:54.124 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:54.124 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:54.124 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:54.124 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:54.124 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:54.124 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:54.124 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:54.124 ' 00:35:59.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:59.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:59.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:59.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:59.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:59.391 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:59.391 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:59.391 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:59.391 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:59.391 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:59.391 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:59.391 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:59.391 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:59.391 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:59.391 13:54:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:59.391 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:59.391 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:59.391 13:54:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 116220 00:35:59.391 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 116220 ']' 00:35:59.391 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill 
-0 116220 00:35:59.391 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:35:59.391 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:59.391 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 116220 00:35:59.391 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:59.391 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:59.391 killing process with pid 116220 00:35:59.391 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 116220' 00:35:59.391 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 116220 00:35:59.391 [2024-05-15 13:54:12.373885] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:35:59.391 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 116220 00:35:59.651 13:54:12 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:59.651 13:54:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:59.651 13:54:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 116220 ']' 00:35:59.651 13:54:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 116220 00:35:59.651 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 116220 ']' 00:35:59.651 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 116220 00:35:59.651 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (116220) - No such process 00:35:59.651 Process with pid 116220 is not found 00:35:59.651 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 116220 is not found' 00:35:59.651 13:54:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:59.651 13:54:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:59.651 13:54:12 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:59.651 00:35:59.651 real 0m17.472s 00:35:59.651 user 0m37.719s 00:35:59.651 sys 0m0.933s 00:35:59.651 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:59.651 13:54:12 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:59.651 ************************************ 00:35:59.651 END TEST spdkcli_nvmf_tcp 00:35:59.651 ************************************ 00:35:59.651 13:54:12 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:59.651 13:54:12 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:35:59.651 13:54:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:59.651 13:54:12 -- common/autotest_common.sh@10 -- # set +x 00:35:59.651 ************************************ 00:35:59.651 START TEST nvmf_identify_passthru 00:35:59.651 ************************************ 00:35:59.651 13:54:12 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:59.651 * Looking for test storage... 
00:35:59.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:59.651 13:54:12 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:59.651 13:54:12 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:59.651 13:54:12 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:59.651 13:54:12 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:59.651 13:54:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.651 13:54:12 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.651 13:54:12 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.651 13:54:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:59.651 13:54:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:59.651 13:54:12 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:59.651 13:54:12 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:59.651 13:54:12 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:59.651 13:54:12 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:59.651 13:54:12 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.651 13:54:12 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.651 13:54:12 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.651 13:54:12 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:59.651 13:54:12 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.651 13:54:12 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:59.651 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.651 13:54:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:59.651 13:54:12 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:59.911 13:54:12 nvmf_identify_passthru -- 
nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:35:59.911 Cannot find device "nvmf_tgt_br" 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:35:59.911 Cannot find device "nvmf_tgt_br2" 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:35:59.911 Cannot find device "nvmf_tgt_br" 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:35:59.911 Cannot find device "nvmf_tgt_br2" 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:59.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:59.911 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:35:59.911 13:54:12 
nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:35:59.911 13:54:12 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:35:59.911 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:36:00.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:00.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:36:00.170 00:36:00.170 --- 10.0.0.2 ping statistics --- 00:36:00.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:00.170 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:36:00.170 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:00.170 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:36:00.170 00:36:00.170 --- 10.0.0.3 ping statistics --- 00:36:00.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:00.170 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:00.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:00.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:36:00.170 00:36:00.170 --- 10.0.0.1 ping statistics --- 00:36:00.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:00.170 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:00.170 13:54:13 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:00.170 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:00.170 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:00.170 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.170 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:00.170 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:36:00.170 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:36:00.170 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:36:00.170 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:36:00.170 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:36:00.170 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:36:00.170 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:00.170 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:36:00.171 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:36:00.171 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:36:00.171 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:36:00.171 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:36:00.171 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:36:00.171 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:36:00.171 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:36:00.171 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:00.171 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:00.429 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
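Note: the nvmf_veth_init trace above builds the isolated NVMe/TCP test network: a target network namespace, veth pairs whose far ends stay in the root namespace, a bridge joining those ends, an iptables rule for the default NVMe/TCP port 4420, and ping checks in both directions. The condensed sketch below reproduces that topology using the interface names and addresses from this log; it is illustrative only and omits the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the cleanup path that the real helper also handles.

#!/usr/bin/env bash
# Minimal sketch of the veth/netns topology created by nvmf_veth_init (illustrative only).
set -euo pipefail

NS=nvmf_tgt_ns_spdk                 # target network namespace
ip netns add "$NS"

# veth pairs: the *_if end is used for traffic, the *_br end joins the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"

# addressing matches the log: 10.0.0.1 = initiator, 10.0.0.2 = target
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# bridge the root-namespace ends so initiator and target can reach each other
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br

# allow NVMe/TCP traffic on port 4420 and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2                        # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> initiator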
00:36:00.429 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:36:00.429 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:00.429 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:00.429 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:36:00.429 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:00.429 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:00.429 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.429 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:00.429 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:00.429 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.687 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=116708 00:36:00.687 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:00.687 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:00.687 13:54:13 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 116708 00:36:00.687 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 116708 ']' 00:36:00.687 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:00.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:00.687 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:00.687 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:00.687 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:00.687 13:54:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:00.687 [2024-05-15 13:54:13.584275] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:36:00.687 [2024-05-15 13:54:13.584383] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:00.687 [2024-05-15 13:54:13.704813] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:00.687 [2024-05-15 13:54:13.723329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:00.945 [2024-05-15 13:54:13.826183] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:00.945 [2024-05-15 13:54:13.826253] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:00.945 [2024-05-15 13:54:13.826267] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:00.945 [2024-05-15 13:54:13.826278] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:00.945 [2024-05-15 13:54:13.826287] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:00.945 [2024-05-15 13:54:13.826472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:36:00.945 [2024-05-15 13:54:13.826630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:36:00.945 [2024-05-15 13:54:13.827316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:36:00.945 [2024-05-15 13:54:13.827365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:01.880 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:01.880 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:36:01.880 13:54:14 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:01.880 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.880 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.880 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.880 13:54:14 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:01.880 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.880 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.880 [2024-05-15 13:54:14.748384] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:01.880 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.880 13:54:14 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:01.880 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.880 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.880 [2024-05-15 13:54:14.762271] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:01.880 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.880 13:54:14 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:01.880 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:01.880 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.880 13:54:14 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:36:01.881 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.881 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.881 Nvme0n1 00:36:01.881 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.881 13:54:14 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:01.881 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.881 13:54:14 nvmf_identify_passthru 
-- common/autotest_common.sh@10 -- # set +x 00:36:01.881 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.881 13:54:14 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:01.881 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.881 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.881 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.881 13:54:14 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:01.881 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.881 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.881 [2024-05-15 13:54:14.899138] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:01.881 [2024-05-15 13:54:14.899416] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:01.881 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.881 13:54:14 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:01.881 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:01.881 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:01.881 [ 00:36:01.881 { 00:36:01.881 "allow_any_host": true, 00:36:01.881 "hosts": [], 00:36:01.881 "listen_addresses": [], 00:36:01.881 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:01.881 "subtype": "Discovery" 00:36:01.881 }, 00:36:01.881 { 00:36:01.881 "allow_any_host": true, 00:36:01.881 "hosts": [], 00:36:01.881 "listen_addresses": [ 00:36:01.881 { 00:36:01.881 "adrfam": "IPv4", 00:36:01.881 "traddr": "10.0.0.2", 00:36:01.881 "trsvcid": "4420", 00:36:01.881 "trtype": "TCP" 00:36:01.881 } 00:36:01.881 ], 00:36:01.881 "max_cntlid": 65519, 00:36:01.881 "max_namespaces": 1, 00:36:01.881 "min_cntlid": 1, 00:36:01.881 "model_number": "SPDK bdev Controller", 00:36:01.881 "namespaces": [ 00:36:01.881 { 00:36:01.881 "bdev_name": "Nvme0n1", 00:36:01.881 "name": "Nvme0n1", 00:36:01.881 "nguid": "350ADD2066FA4639930DEDB703CBA39C", 00:36:01.881 "nsid": 1, 00:36:01.881 "uuid": "350add20-66fa-4639-930d-edb703cba39c" 00:36:01.881 } 00:36:01.881 ], 00:36:01.881 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:01.881 "serial_number": "SPDK00000000000001", 00:36:01.881 "subtype": "NVMe" 00:36:01.881 } 00:36:01.881 ] 00:36:01.881 13:54:14 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:01.881 13:54:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:01.881 13:54:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:01.881 13:54:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:02.139 13:54:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:36:02.139 13:54:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:02.139 13:54:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:02.139 13:54:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:02.398 13:54:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:36:02.398 13:54:15 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:36:02.398 13:54:15 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:36:02.398 13:54:15 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:02.398 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:02.398 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.398 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:02.398 13:54:15 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:02.398 13:54:15 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:02.398 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:02.398 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:36:02.398 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:02.398 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:36:02.398 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:02.398 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:02.398 rmmod nvme_tcp 00:36:02.398 rmmod nvme_fabrics 00:36:02.398 rmmod nvme_keyring 00:36:02.398 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:02.398 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:36:02.398 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:36:02.398 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 116708 ']' 00:36:02.398 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 116708 00:36:02.398 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 116708 ']' 00:36:02.398 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 116708 00:36:02.398 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:36:02.398 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:02.398 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 116708 00:36:02.398 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:02.398 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:02.398 killing process with pid 116708 00:36:02.398 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 116708' 00:36:02.398 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 116708 00:36:02.398 [2024-05-15 13:54:15.475856] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal 
in v24.09 hit 1 times 00:36:02.398 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 116708 00:36:02.657 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:36:02.657 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:02.657 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:02.657 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:02.657 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:02.657 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.657 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:02.657 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.657 13:54:15 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:36:02.657 00:36:02.657 real 0m3.101s 00:36:02.657 user 0m7.947s 00:36:02.657 sys 0m0.765s 00:36:02.657 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:02.657 13:54:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:02.657 ************************************ 00:36:02.657 END TEST nvmf_identify_passthru 00:36:02.657 ************************************ 00:36:02.916 13:54:15 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:36:02.916 13:54:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:02.916 13:54:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:02.916 13:54:15 -- common/autotest_common.sh@10 -- # set +x 00:36:02.916 ************************************ 00:36:02.916 START TEST nvmf_dif 00:36:02.916 ************************************ 00:36:02.916 13:54:15 nvmf_dif -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:36:02.916 * Looking for test storage... 
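Note: the nvmf_identify_passthru run that just finished boils down to one check: the controller exported with --passthru-identify-ctrlr must report the same serial number and model number over NVMe/TCP as the local PCIe device does. A hedged sketch of that comparison follows; the identify binary path, the PCIe address 0000:00:10.0 and the TCP connection string are taken from this run, while the field() helper is a hypothetical stand-in for the grep/awk pipeline seen in the trace.

#!/usr/bin/env bash
# Sketch of the passthru identity check from identify_passthru.sh (assumes the SPDK
# repo layout of this run and an nvmf subsystem already listening on 10.0.0.2:4420
# that passes identify through to the local controller).
set -euo pipefail

IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
BDF=0000:00:10.0   # first NVMe bdf found earlier via scripts/gen_nvme.sh | jq

field() { grep "$1" | awk '{print $3}'; }   # hypothetical helper wrapping the trace's grep/awk

pcie_serial=$("$IDENTIFY" -r "trtype:PCIe traddr:$BDF" -i 0 | field 'Serial Number:')
pcie_model=$("$IDENTIFY"  -r "trtype:PCIe traddr:$BDF" -i 0 | field 'Model Number:')

tcp_args=' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
tcp_serial=$("$IDENTIFY" -r "$tcp_args" | field 'Serial Number:')
tcp_model=$("$IDENTIFY"  -r "$tcp_args" | field 'Model Number:')

# The test fails if the values diverge; in this run both sides report 12340 / QEMU.
[[ "$pcie_serial" == "$tcp_serial" ]] || { echo "serial mismatch"; exit 1; }
[[ "$pcie_model"  == "$tcp_model"  ]] || { echo "model mismatch";  exit 1; }
echo "passthru identify OK: serial=$tcp_serial model=$tcp_model"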
00:36:02.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:02.916 13:54:15 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:02.916 13:54:15 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:02.916 13:54:15 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:02.916 13:54:15 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:02.916 13:54:15 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.916 13:54:15 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.916 13:54:15 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.916 13:54:15 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:02.916 13:54:15 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:02.916 13:54:15 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:02.916 13:54:15 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:02.916 13:54:15 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:02.916 13:54:15 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:02.916 13:54:15 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.916 13:54:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:02.916 13:54:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:02.916 13:54:15 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:36:02.916 Cannot find device "nvmf_tgt_br" 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@155 -- # true 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:36:02.916 Cannot find device "nvmf_tgt_br2" 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@156 -- # true 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:36:02.916 Cannot find device "nvmf_tgt_br" 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@158 -- # true 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:36:02.916 Cannot find device "nvmf_tgt_br2" 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@159 -- # true 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:36:02.916 13:54:15 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:03.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@162 -- # true 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:03.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@163 -- # true 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:03.175 
13:54:16 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:36:03.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:03.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:36:03.175 00:36:03.175 --- 10.0.0.2 ping statistics --- 00:36:03.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.175 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:36:03.175 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:03.175 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:36:03.175 00:36:03.175 --- 10.0.0.3 ping statistics --- 00:36:03.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.175 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:03.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:03.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:36:03.175 00:36:03.175 --- 10.0.0.1 ping statistics --- 00:36:03.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:03.175 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:03.175 13:54:16 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:03.434 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:03.434 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:36:03.434 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:36:03.693 13:54:16 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:03.693 13:54:16 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:03.693 13:54:16 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:03.693 13:54:16 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:03.693 13:54:16 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:03.693 13:54:16 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:03.693 13:54:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:03.693 13:54:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:03.693 13:54:16 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:03.693 13:54:16 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:36:03.693 13:54:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.693 13:54:16 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=117052 
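Note: at this point the nvmf_dif trace launches the NVMe-oF target inside the test namespace and waits for its RPC socket (the nvmfappstart / waitforlisten helpers). The sketch below is a simplified stand-in: the binary path and flags match this run, but the polling loop only approximates what the real waitforlisten does.

#!/usr/bin/env bash
# Sketch of starting nvmf_tgt in the test namespace and waiting for its RPC socket.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt    # path from this run
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
RPC_SOCK=/var/tmp/spdk.sock

# -i 0 sets the app's shared-memory id (used by spdk_trace), -e 0xFFFF enables all
# tracepoint groups, as the startup banner in the log confirms.
ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF &
nvmfpid=$!

# crude waitforlisten: poll until the UNIX-domain RPC socket answers
for _ in $(seq 1 100); do
    if "$RPC" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done

echo "nvmf_tgt running as pid $nvmfpid"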
00:36:03.693 13:54:16 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:03.693 13:54:16 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 117052 00:36:03.693 13:54:16 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 117052 ']' 00:36:03.693 13:54:16 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.693 13:54:16 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:03.693 13:54:16 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:03.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:03.693 13:54:16 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:03.693 13:54:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.693 [2024-05-15 13:54:16.649677] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:36:03.693 [2024-05-15 13:54:16.649757] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:03.693 [2024-05-15 13:54:16.769224] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:03.693 [2024-05-15 13:54:16.786890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:03.952 [2024-05-15 13:54:16.883664] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:03.952 [2024-05-15 13:54:16.883727] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:03.952 [2024-05-15 13:54:16.883741] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:03.952 [2024-05-15 13:54:16.883751] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:03.952 [2024-05-15 13:54:16.883760] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
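Note: once the target is up, the dif test provisions it over RPC: a TCP transport with DIF insert/strip enabled, a null bdev with 16-byte metadata and DIF type 1, then a subsystem with that namespace and a TCP listener on the test address. The sketch below mirrors the rpc_cmd calls that appear in the trace just below, issued through scripts/rpc.py directly (rpc_cmd is a thin wrapper around it); the arguments are copied from this run.

#!/usr/bin/env bash
# Sketch of the RPC provisioning sequence used by the fio_dif_1_default test.
set -euo pipefail
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # path from this run

# TCP transport with DIF insert/strip so the target inserts/strips protection info
$RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip

# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

# subsystem cnode0 exporting that bdev over the test network
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420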
00:36:03.952 [2024-05-15 13:54:16.883793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.887 13:54:17 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:04.887 13:54:17 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:36:04.887 13:54:17 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:04.887 13:54:17 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:04.887 13:54:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:04.887 13:54:17 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:04.887 13:54:17 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:04.887 13:54:17 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:04.887 13:54:17 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.887 13:54:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:04.887 [2024-05-15 13:54:17.682030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:04.887 13:54:17 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.887 13:54:17 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:04.887 13:54:17 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:04.887 13:54:17 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:04.887 13:54:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:04.887 ************************************ 00:36:04.887 START TEST fio_dif_1_default 00:36:04.887 ************************************ 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:04.887 bdev_null0 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:04.887 [2024-05-15 13:54:17.729960] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:36:04.887 [2024-05-15 13:54:17.730211] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:04.887 { 00:36:04.887 "params": { 00:36:04.887 "name": "Nvme$subsystem", 00:36:04.887 "trtype": "$TEST_TRANSPORT", 00:36:04.887 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.887 "adrfam": "ipv4", 00:36:04.887 "trsvcid": "$NVMF_PORT", 00:36:04.887 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.887 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.887 "hdgst": ${hdgst:-false}, 00:36:04.887 "ddgst": ${ddgst:-false} 00:36:04.887 }, 00:36:04.887 "method": "bdev_nvme_attach_controller" 00:36:04.887 } 00:36:04.887 EOF 00:36:04.887 )") 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:04.887 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:04.888 "params": { 00:36:04.888 "name": "Nvme0", 00:36:04.888 "trtype": "tcp", 00:36:04.888 "traddr": "10.0.0.2", 00:36:04.888 "adrfam": "ipv4", 00:36:04.888 "trsvcid": "4420", 00:36:04.888 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:04.888 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:04.888 "hdgst": false, 00:36:04.888 "ddgst": false 00:36:04.888 }, 00:36:04.888 "method": "bdev_nvme_attach_controller" 00:36:04.888 }' 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:04.888 13:54:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:04.888 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:04.888 fio-3.35 00:36:04.888 Starting 1 thread 00:36:17.091 00:36:17.091 filename0: (groupid=0, jobs=1): err= 0: pid=117138: Wed May 15 13:54:28 2024 00:36:17.091 read: IOPS=1201, BW=4806KiB/s (4921kB/s)(46.9MiB/10001msec) 00:36:17.091 slat (nsec): min=6629, max=45908, avg=8645.86, stdev=2455.17 00:36:17.091 clat (usec): min=408, max=42521, avg=3303.34, stdev=10323.30 00:36:17.091 lat (usec): min=414, max=42531, avg=3311.99, stdev=10323.34 00:36:17.091 clat percentiles (usec): 00:36:17.091 | 1.00th=[ 449], 5.00th=[ 453], 10.00th=[ 457], 20.00th=[ 465], 00:36:17.091 | 30.00th=[ 469], 40.00th=[ 474], 50.00th=[ 478], 60.00th=[ 486], 00:36:17.091 | 70.00th=[ 490], 80.00th=[ 498], 90.00th=[ 519], 95.00th=[40633], 00:36:17.091 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:36:17.091 | 99.99th=[42730] 00:36:17.091 bw ( KiB/s): min= 3136, max= 7872, per=99.23%, avg=4769.68, stdev=1161.07, samples=19 
00:36:17.091 iops : min= 784, max= 1968, avg=1192.42, stdev=290.27, samples=19 00:36:17.091 lat (usec) : 500=81.01%, 750=12.00% 00:36:17.091 lat (msec) : 10=0.03%, 50=6.96% 00:36:17.091 cpu : usr=90.37%, sys=8.86%, ctx=31, majf=0, minf=0 00:36:17.091 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:17.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:17.091 issued rwts: total=12016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:17.091 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:17.091 00:36:17.091 Run status group 0 (all jobs): 00:36:17.091 READ: bw=4806KiB/s (4921kB/s), 4806KiB/s-4806KiB/s (4921kB/s-4921kB/s), io=46.9MiB (49.2MB), run=10001-10001msec 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.091 00:36:17.091 real 0m11.014s 00:36:17.091 user 0m9.690s 00:36:17.091 sys 0m1.150s 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:17.091 ************************************ 00:36:17.091 END TEST fio_dif_1_default 00:36:17.091 ************************************ 00:36:17.091 13:54:28 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:17.091 13:54:28 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:17.091 13:54:28 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:17.091 13:54:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:17.091 ************************************ 00:36:17.091 START TEST fio_dif_1_multi_subsystems 00:36:17.091 ************************************ 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 
-- # for sub in "$@" 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:17.091 bdev_null0 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:17.091 [2024-05-15 13:54:28.797945] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:17.091 bdev_null1 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:17.091 13:54:28 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:17.091 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:17.091 { 00:36:17.091 "params": { 00:36:17.091 "name": "Nvme$subsystem", 00:36:17.091 "trtype": "$TEST_TRANSPORT", 00:36:17.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:17.091 "adrfam": "ipv4", 00:36:17.091 "trsvcid": "$NVMF_PORT", 00:36:17.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:17.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:17.092 "hdgst": ${hdgst:-false}, 00:36:17.092 "ddgst": ${ddgst:-false} 00:36:17.092 }, 00:36:17.092 "method": "bdev_nvme_attach_controller" 00:36:17.092 } 00:36:17.092 EOF 00:36:17.092 )") 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:17.092 13:54:28 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:17.092 { 00:36:17.092 "params": { 00:36:17.092 "name": "Nvme$subsystem", 00:36:17.092 "trtype": "$TEST_TRANSPORT", 00:36:17.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:17.092 "adrfam": "ipv4", 00:36:17.092 "trsvcid": "$NVMF_PORT", 00:36:17.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:17.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:17.092 "hdgst": ${hdgst:-false}, 00:36:17.092 "ddgst": ${ddgst:-false} 00:36:17.092 }, 00:36:17.092 "method": "bdev_nvme_attach_controller" 00:36:17.092 } 00:36:17.092 EOF 00:36:17.092 )") 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
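The rpc_cmd trace above is the whole data-path setup for this test: two null bdevs with 16-byte metadata and DIF type 1 protection, each wrapped in its own NVMe-oF subsystem and exposed on the TCP listener at 10.0.0.2:4420 (the *NOTICE* line confirms the listener came up). As a minimal sketch, not captured output, the same provisioning could be run by hand against an already running nvmf_tgt using the stock scripts/rpc.py client; the client path and the pre-created TCP transport are assumptions, while the RPC method names and arguments are copied verbatim from the trace:

    #!/usr/bin/env bash
    # Illustrative sketch, not part of the recorded run. Assumes nvmf_tgt is already running
    # and a TCP transport has been created (e.g. via "rpc.py nvmf_create_transport -t tcp").
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path to the standard SPDK RPC client
    for i in 0 1; do
      # 64 blocks x 512 B with 16 B metadata, end-to-end protection type 1 (arguments as traced)
      "$rpc" bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
      "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" --serial-number "53313233-$i" --allow-any-host
      "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
      "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done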
00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:17.092 "params": { 00:36:17.092 "name": "Nvme0", 00:36:17.092 "trtype": "tcp", 00:36:17.092 "traddr": "10.0.0.2", 00:36:17.092 "adrfam": "ipv4", 00:36:17.092 "trsvcid": "4420", 00:36:17.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:17.092 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:17.092 "hdgst": false, 00:36:17.092 "ddgst": false 00:36:17.092 }, 00:36:17.092 "method": "bdev_nvme_attach_controller" 00:36:17.092 },{ 00:36:17.092 "params": { 00:36:17.092 "name": "Nvme1", 00:36:17.092 "trtype": "tcp", 00:36:17.092 "traddr": "10.0.0.2", 00:36:17.092 "adrfam": "ipv4", 00:36:17.092 "trsvcid": "4420", 00:36:17.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:17.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:17.092 "hdgst": false, 00:36:17.092 "ddgst": false 00:36:17.092 }, 00:36:17.092 "method": "bdev_nvme_attach_controller" 00:36:17.092 }' 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:17.092 13:54:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:17.092 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:17.092 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:17.092 fio-3.35 00:36:17.092 Starting 2 threads 00:36:27.078 00:36:27.078 filename0: (groupid=0, jobs=1): err= 0: pid=117292: Wed May 15 13:54:39 2024 00:36:27.078 read: IOPS=188, BW=754KiB/s (772kB/s)(7568KiB/10034msec) 00:36:27.078 slat (nsec): min=6680, max=64006, avg=9727.30, stdev=4819.04 00:36:27.078 clat (usec): min=434, max=42502, avg=21182.89, stdev=20296.18 00:36:27.078 lat (usec): min=442, max=42513, avg=21192.62, stdev=20296.25 00:36:27.078 clat percentiles (usec): 00:36:27.078 | 1.00th=[ 453], 5.00th=[ 465], 10.00th=[ 474], 20.00th=[ 486], 00:36:27.078 | 30.00th=[ 498], 40.00th=[ 515], 50.00th=[40633], 60.00th=[41157], 00:36:27.078 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:36:27.078 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:36:27.078 | 99.99th=[42730] 00:36:27.078 bw ( KiB/s): min= 544, max= 992, per=51.24%, avg=755.10, stdev=135.57, samples=20 00:36:27.078 iops : 
min= 136, max= 248, avg=188.75, stdev=33.90, samples=20 00:36:27.078 lat (usec) : 500=32.29%, 750=15.49%, 1000=1.06% 00:36:27.078 lat (msec) : 2=0.21%, 50=50.95% 00:36:27.078 cpu : usr=95.15%, sys=4.46%, ctx=15, majf=0, minf=0 00:36:27.078 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.078 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.078 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:27.078 filename1: (groupid=0, jobs=1): err= 0: pid=117293: Wed May 15 13:54:39 2024 00:36:27.078 read: IOPS=179, BW=720KiB/s (737kB/s)(7216KiB/10027msec) 00:36:27.078 slat (nsec): min=6925, max=47715, avg=9945.59, stdev=4370.88 00:36:27.078 clat (usec): min=445, max=42505, avg=22201.23, stdev=20253.98 00:36:27.078 lat (usec): min=453, max=42516, avg=22211.18, stdev=20254.15 00:36:27.078 clat percentiles (usec): 00:36:27.078 | 1.00th=[ 457], 5.00th=[ 465], 10.00th=[ 478], 20.00th=[ 486], 00:36:27.078 | 30.00th=[ 502], 40.00th=[ 523], 50.00th=[40633], 60.00th=[41157], 00:36:27.078 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:36:27.078 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:36:27.078 | 99.99th=[42730] 00:36:27.078 bw ( KiB/s): min= 480, max= 1088, per=48.80%, avg=719.90, stdev=151.00, samples=20 00:36:27.078 iops : min= 120, max= 272, avg=179.95, stdev=37.75, samples=20 00:36:27.078 lat (usec) : 500=29.71%, 750=14.63%, 1000=2.00% 00:36:27.078 lat (msec) : 2=0.22%, 50=53.44% 00:36:27.078 cpu : usr=95.24%, sys=4.31%, ctx=9, majf=0, minf=0 00:36:27.078 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:27.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:27.078 issued rwts: total=1804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:27.078 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:27.078 00:36:27.078 Run status group 0 (all jobs): 00:36:27.078 READ: bw=1473KiB/s (1509kB/s), 720KiB/s-754KiB/s (737kB/s-772kB/s), io=14.4MiB (15.1MB), run=10027-10034msec 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.078 13:54:39 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.078 00:36:27.078 real 0m11.228s 00:36:27.078 user 0m19.916s 00:36:27.078 sys 0m1.175s 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:27.078 13:54:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:27.078 ************************************ 00:36:27.078 END TEST fio_dif_1_multi_subsystems 00:36:27.078 ************************************ 00:36:27.078 13:54:40 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:27.078 13:54:40 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:27.078 13:54:40 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:27.078 13:54:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:27.078 ************************************ 00:36:27.078 START TEST fio_dif_rand_params 00:36:27.078 ************************************ 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.078 bdev_null0 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.078 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:27.079 [2024-05-15 13:54:40.083078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:27.079 { 00:36:27.079 "params": { 00:36:27.079 "name": "Nvme$subsystem", 00:36:27.079 "trtype": "$TEST_TRANSPORT", 00:36:27.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:27.079 "adrfam": "ipv4", 00:36:27.079 "trsvcid": "$NVMF_PORT", 00:36:27.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:27.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:27.079 "hdgst": ${hdgst:-false}, 00:36:27.079 "ddgst": ${ddgst:-false} 00:36:27.079 }, 00:36:27.079 "method": "bdev_nvme_attach_controller" 00:36:27.079 } 00:36:27.079 EOF 00:36:27.079 )") 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
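The jq step above, and the LD_PRELOAD launch traced just below it, are how the workload actually reaches the target: fio is loaded with SPDK's bdev ioengine plugin and handed the generated bdev configuration plus the job file over /dev/fd process substitutions. A standalone sketch of the same invocation with ordinary files in place of the fd-backed ones (the /tmp paths are placeholders; the fio and plugin paths are taken from the trace):

    # Illustrative sketch, not part of the recorded run.
    # /tmp/bdev.json would hold the full JSON that gen_nvmf_target_json assembles from the
    # bdev_nvme_attach_controller fragments printed below (only the fragments are visible in
    # this excerpt); /tmp/dif.fio is a normal fio job file whose filename= lines name the
    # bdevs created by that attach. Both file names are hypothetical.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    LD_PRELOAD="$plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev \
        --spdk_json_conf /tmp/bdev.json \
        /tmp/dif.fio

Running through the plugin keeps the initiator entirely in user space: the bdev_nvme module attaches to the subsystems over TCP as described by the JSON, and fio's job sections then address the resulting bdevs directly, which is why no kernel-side nvme connect step shows up anywhere in the trace.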
00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:27.079 "params": { 00:36:27.079 "name": "Nvme0", 00:36:27.079 "trtype": "tcp", 00:36:27.079 "traddr": "10.0.0.2", 00:36:27.079 "adrfam": "ipv4", 00:36:27.079 "trsvcid": "4420", 00:36:27.079 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:27.079 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:27.079 "hdgst": false, 00:36:27.079 "ddgst": false 00:36:27.079 }, 00:36:27.079 "method": "bdev_nvme_attach_controller" 00:36:27.079 }' 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:27.079 13:54:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:27.337 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:27.337 ... 
00:36:27.337 fio-3.35 00:36:27.337 Starting 3 threads 00:36:33.936 00:36:33.936 filename0: (groupid=0, jobs=1): err= 0: pid=117444: Wed May 15 13:54:45 2024 00:36:33.936 read: IOPS=192, BW=24.1MiB/s (25.2MB/s)(120MiB/5005msec) 00:36:33.936 slat (nsec): min=7467, max=34277, avg=12426.04, stdev=5181.37 00:36:33.936 clat (usec): min=8103, max=19590, avg=15559.38, stdev=2455.43 00:36:33.936 lat (usec): min=8113, max=19613, avg=15571.81, stdev=2455.51 00:36:33.936 clat percentiles (usec): 00:36:33.936 | 1.00th=[ 8717], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[14484], 00:36:33.936 | 30.00th=[15270], 40.00th=[15795], 50.00th=[16188], 60.00th=[16581], 00:36:33.936 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18220], 00:36:33.936 | 99.00th=[19006], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:36:33.936 | 99.99th=[19530] 00:36:33.936 bw ( KiB/s): min=21504, max=32256, per=28.75%, avg=24576.00, stdev=2850.70, samples=10 00:36:33.936 iops : min= 168, max= 252, avg=192.00, stdev=22.27, samples=10 00:36:33.936 lat (msec) : 10=7.17%, 20=92.83% 00:36:33.936 cpu : usr=92.25%, sys=6.16%, ctx=6, majf=0, minf=9 00:36:33.936 IO depths : 1=30.3%, 2=69.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:33.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.936 issued rwts: total=963,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.936 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:33.936 filename0: (groupid=0, jobs=1): err= 0: pid=117445: Wed May 15 13:54:45 2024 00:36:33.936 read: IOPS=250, BW=31.3MiB/s (32.8MB/s)(157MiB/5007msec) 00:36:33.936 slat (nsec): min=7600, max=46925, avg=15131.81, stdev=4051.85 00:36:33.936 clat (usec): min=6249, max=52883, avg=11963.88, stdev=4577.12 00:36:33.936 lat (usec): min=6263, max=52925, avg=11979.02, stdev=4577.28 00:36:33.936 clat percentiles (usec): 00:36:33.936 | 1.00th=[ 7177], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[10683], 00:36:33.936 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11731], 60.00th=[11994], 00:36:33.936 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13304], 00:36:33.936 | 99.00th=[51119], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:36:33.936 | 99.99th=[52691] 00:36:33.936 bw ( KiB/s): min=24576, max=34560, per=37.46%, avg=32025.60, stdev=2919.97, samples=10 00:36:33.936 iops : min= 192, max= 270, avg=250.20, stdev=22.81, samples=10 00:36:33.936 lat (msec) : 10=10.30%, 20=88.51%, 100=1.20% 00:36:33.936 cpu : usr=91.03%, sys=7.13%, ctx=31, majf=0, minf=9 00:36:33.936 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:33.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.936 issued rwts: total=1253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.936 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:33.936 filename0: (groupid=0, jobs=1): err= 0: pid=117446: Wed May 15 13:54:45 2024 00:36:33.936 read: IOPS=225, BW=28.2MiB/s (29.5MB/s)(141MiB/5005msec) 00:36:33.936 slat (nsec): min=7718, max=51827, avg=14884.88, stdev=5043.83 00:36:33.936 clat (usec): min=5987, max=55882, avg=13285.26, stdev=5007.87 00:36:33.936 lat (usec): min=6000, max=55891, avg=13300.15, stdev=5007.98 00:36:33.936 clat percentiles (usec): 00:36:33.936 | 1.00th=[ 7242], 5.00th=[ 8455], 10.00th=[10814], 20.00th=[11731], 00:36:33.936 | 
30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 60.00th=[13435], 00:36:33.936 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14615], 95.00th=[15139], 00:36:33.936 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55313], 99.95th=[55837], 00:36:33.936 | 99.99th=[55837] 00:36:33.936 bw ( KiB/s): min=25088, max=33792, per=33.75%, avg=28851.20, stdev=2428.78, samples=10 00:36:33.936 iops : min= 196, max= 264, avg=225.40, stdev=18.97, samples=10 00:36:33.936 lat (msec) : 10=7.27%, 20=91.40%, 100=1.33% 00:36:33.936 cpu : usr=91.11%, sys=7.03%, ctx=35, majf=0, minf=9 00:36:33.936 IO depths : 1=3.8%, 2=96.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:33.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:33.936 issued rwts: total=1128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:33.936 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:33.936 00:36:33.936 Run status group 0 (all jobs): 00:36:33.936 READ: bw=83.5MiB/s (87.5MB/s), 24.1MiB/s-31.3MiB/s (25.2MB/s-32.8MB/s), io=418MiB (438MB), run=5005-5007msec 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:33.936 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 2 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.937 bdev_null0 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.937 [2024-05-15 13:54:46.080149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.937 bdev_null1 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
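This second pass recreates the subsystems with DIF type 2 and switches the fio parameters to bs=4k, numjobs=8, iodepth=16 with two extra files, so the banner further down starts 24 threads across the filename0/filename1/filename2 jobs. The harness builds that job file on the fly with gen_fio_conf; a rough hand-written equivalent is sketched below. It assumes the attached controllers expose namespace bdevs named Nvme0n1/Nvme1n1/Nvme2n1, that the plugin runs in threaded mode, and a roughly 10-second time-based runtime, none of which are stated in this excerpt:

    # Illustrative sketch, not part of the recorded run: an approximate job file for the
    # 24-thread run launched below (3 job sections x numjobs=8).
    cat > /tmp/dif_rand.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    # thread=1 is an assumption; the SPDK fio plugins are normally run in threaded mode
    thread=1
    rw=randread
    bs=4096
    iodepth=16
    numjobs=8
    # runtime/time_based are assumptions; the jobs in this log ran for roughly 10 s
    time_based=1
    runtime=10

    [filename0]
    # assumed bdev name for the controller attached as "Nvme0"
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1

    [filename2]
    filename=Nvme2n1
    EOF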
00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.937 bdev_null2 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:33.937 { 00:36:33.937 "params": { 00:36:33.937 "name": "Nvme$subsystem", 00:36:33.937 "trtype": "$TEST_TRANSPORT", 00:36:33.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:33.937 "adrfam": "ipv4", 00:36:33.937 "trsvcid": "$NVMF_PORT", 00:36:33.937 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:33.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:33.937 "hdgst": ${hdgst:-false}, 00:36:33.937 "ddgst": ${ddgst:-false} 00:36:33.937 }, 00:36:33.937 "method": "bdev_nvme_attach_controller" 00:36:33.937 } 00:36:33.937 EOF 00:36:33.937 )") 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:33.937 { 00:36:33.937 "params": { 00:36:33.937 "name": "Nvme$subsystem", 00:36:33.937 "trtype": "$TEST_TRANSPORT", 00:36:33.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:33.937 "adrfam": "ipv4", 00:36:33.937 "trsvcid": "$NVMF_PORT", 00:36:33.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:33.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:33.937 "hdgst": ${hdgst:-false}, 00:36:33.937 "ddgst": ${ddgst:-false} 00:36:33.937 }, 00:36:33.937 "method": "bdev_nvme_attach_controller" 00:36:33.937 } 00:36:33.937 EOF 00:36:33.937 )") 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:33.937 { 00:36:33.937 "params": { 00:36:33.937 "name": "Nvme$subsystem", 00:36:33.937 "trtype": "$TEST_TRANSPORT", 00:36:33.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:33.937 "adrfam": "ipv4", 00:36:33.937 "trsvcid": "$NVMF_PORT", 00:36:33.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:33.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:33.937 "hdgst": ${hdgst:-false}, 00:36:33.937 "ddgst": ${ddgst:-false} 00:36:33.937 }, 00:36:33.937 "method": "bdev_nvme_attach_controller" 00:36:33.937 } 00:36:33.937 EOF 00:36:33.937 )") 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:33.937 13:54:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:36:33.938 13:54:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:33.938 13:54:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:33.938 "params": { 00:36:33.938 "name": "Nvme0", 00:36:33.938 "trtype": "tcp", 00:36:33.938 "traddr": "10.0.0.2", 00:36:33.938 "adrfam": "ipv4", 00:36:33.938 "trsvcid": "4420", 00:36:33.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:33.938 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:33.938 "hdgst": false, 00:36:33.938 "ddgst": false 00:36:33.938 }, 00:36:33.938 "method": "bdev_nvme_attach_controller" 00:36:33.938 },{ 00:36:33.938 "params": { 00:36:33.938 "name": "Nvme1", 00:36:33.938 "trtype": "tcp", 00:36:33.938 "traddr": "10.0.0.2", 00:36:33.938 "adrfam": "ipv4", 00:36:33.938 "trsvcid": "4420", 00:36:33.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:33.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:33.938 "hdgst": false, 00:36:33.938 "ddgst": false 00:36:33.938 }, 00:36:33.938 "method": "bdev_nvme_attach_controller" 00:36:33.938 },{ 00:36:33.938 "params": { 00:36:33.938 "name": "Nvme2", 00:36:33.938 "trtype": "tcp", 00:36:33.938 "traddr": "10.0.0.2", 00:36:33.938 "adrfam": "ipv4", 00:36:33.938 "trsvcid": "4420", 00:36:33.938 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:33.938 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:33.938 "hdgst": false, 00:36:33.938 "ddgst": false 00:36:33.938 }, 00:36:33.938 "method": "bdev_nvme_attach_controller" 00:36:33.938 }' 00:36:33.938 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:33.938 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:33.938 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:33.938 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:33.938 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:33.938 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:33.938 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:33.938 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:33.938 13:54:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:33.938 13:54:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:33.938 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:33.938 ... 00:36:33.938 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:33.938 ... 00:36:33.938 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:33.938 ... 00:36:33.938 fio-3.35 00:36:33.938 Starting 24 threads 00:36:46.148 00:36:46.148 filename0: (groupid=0, jobs=1): err= 0: pid=117541: Wed May 15 13:54:57 2024 00:36:46.148 read: IOPS=214, BW=859KiB/s (880kB/s)(8624KiB/10035msec) 00:36:46.148 slat (usec): min=3, max=8016, avg=19.45, stdev=211.26 00:36:46.148 clat (msec): min=32, max=148, avg=74.23, stdev=21.69 00:36:46.148 lat (msec): min=32, max=148, avg=74.25, stdev=21.69 00:36:46.148 clat percentiles (msec): 00:36:46.148 | 1.00th=[ 39], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:36:46.148 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 79], 00:36:46.148 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 103], 95.00th=[ 121], 00:36:46.148 | 99.00th=[ 133], 99.50th=[ 136], 99.90th=[ 148], 99.95th=[ 148], 00:36:46.148 | 99.99th=[ 148] 00:36:46.148 bw ( KiB/s): min= 656, max= 1152, per=4.51%, avg=856.05, stdev=142.59, samples=20 00:36:46.148 iops : min= 164, max= 288, avg=214.00, stdev=35.64, samples=20 00:36:46.148 lat (msec) : 50=15.49%, 100=73.47%, 250=11.04% 00:36:46.148 cpu : usr=41.51%, sys=0.96%, ctx=1650, majf=0, minf=9 00:36:46.148 IO depths : 1=0.5%, 2=1.2%, 4=7.5%, 8=77.6%, 16=13.3%, 32=0.0%, >=64=0.0% 00:36:46.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.148 complete : 0=0.0%, 4=89.5%, 8=6.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.148 issued rwts: total=2156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.148 filename0: (groupid=0, jobs=1): err= 0: pid=117542: Wed May 15 13:54:57 2024 00:36:46.148 read: IOPS=232, BW=929KiB/s (951kB/s)(9356KiB/10069msec) 00:36:46.148 slat (usec): min=6, max=4042, avg=15.24, stdev=117.68 00:36:46.148 clat (usec): min=1716, max=188136, avg=68783.57, stdev=26476.54 00:36:46.148 lat (usec): min=1726, max=188145, avg=68798.81, stdev=26488.71 00:36:46.148 clat percentiles (msec): 00:36:46.148 | 1.00th=[ 5], 5.00th=[ 26], 10.00th=[ 44], 20.00th=[ 49], 00:36:46.148 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 68], 60.00th=[ 73], 00:36:46.148 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 104], 95.00th=[ 117], 00:36:46.148 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 188], 99.95th=[ 188], 00:36:46.148 | 99.99th=[ 188] 00:36:46.148 bw ( KiB/s): min= 688, max= 1920, per=4.88%, avg=928.95, stdev=264.15, samples=20 00:36:46.148 iops : min= 172, max= 480, avg=232.20, stdev=66.06, samples=20 00:36:46.148 lat (msec) : 2=0.68%, 10=2.05%, 20=0.68%, 50=19.54%, 100=65.88% 00:36:46.148 lat (msec) : 250=11.16% 00:36:46.148 cpu : usr=45.53%, sys=1.24%, ctx=1651, majf=0, minf=9 00:36:46.148 IO depths : 1=2.0%, 2=4.4%, 4=13.2%, 8=69.5%, 16=10.9%, 32=0.0%, >=64=0.0% 00:36:46.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.148 complete : 0=0.0%, 4=90.7%, 
8=4.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.148 issued rwts: total=2339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.148 filename0: (groupid=0, jobs=1): err= 0: pid=117543: Wed May 15 13:54:57 2024 00:36:46.148 read: IOPS=218, BW=875KiB/s (896kB/s)(8776KiB/10025msec) 00:36:46.148 slat (usec): min=5, max=8037, avg=19.64, stdev=242.03 00:36:46.148 clat (msec): min=24, max=146, avg=72.91, stdev=23.55 00:36:46.148 lat (msec): min=24, max=146, avg=72.93, stdev=23.55 00:36:46.148 clat percentiles (msec): 00:36:46.148 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 51], 00:36:46.148 | 30.00th=[ 58], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 77], 00:36:46.148 | 70.00th=[ 83], 80.00th=[ 91], 90.00th=[ 108], 95.00th=[ 115], 00:36:46.148 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:36:46.148 | 99.99th=[ 146] 00:36:46.148 bw ( KiB/s): min= 608, max= 1120, per=4.60%, avg=874.85, stdev=172.49, samples=20 00:36:46.148 iops : min= 152, max= 280, avg=218.70, stdev=43.11, samples=20 00:36:46.148 lat (msec) : 50=20.10%, 100=64.22%, 250=15.68% 00:36:46.148 cpu : usr=41.63%, sys=0.90%, ctx=1159, majf=0, minf=9 00:36:46.148 IO depths : 1=0.2%, 2=0.5%, 4=7.2%, 8=78.7%, 16=13.4%, 32=0.0%, >=64=0.0% 00:36:46.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.148 complete : 0=0.0%, 4=89.3%, 8=6.3%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.148 issued rwts: total=2194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.148 filename0: (groupid=0, jobs=1): err= 0: pid=117544: Wed May 15 13:54:57 2024 00:36:46.148 read: IOPS=181, BW=725KiB/s (742kB/s)(7264KiB/10025msec) 00:36:46.148 slat (usec): min=4, max=8036, avg=23.10, stdev=289.51 00:36:46.148 clat (msec): min=32, max=178, avg=88.13, stdev=26.27 00:36:46.148 lat (msec): min=32, max=178, avg=88.15, stdev=26.27 00:36:46.148 clat percentiles (msec): 00:36:46.148 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 60], 20.00th=[ 68], 00:36:46.148 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 96], 00:36:46.148 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 136], 00:36:46.148 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 180], 00:36:46.148 | 99.99th=[ 180] 00:36:46.148 bw ( KiB/s): min= 496, max= 944, per=3.78%, avg=719.60, stdev=115.65, samples=20 00:36:46.148 iops : min= 124, max= 236, avg=179.90, stdev=28.91, samples=20 00:36:46.148 lat (msec) : 50=8.20%, 100=62.67%, 250=29.13% 00:36:46.148 cpu : usr=32.22%, sys=0.92%, ctx=850, majf=0, minf=9 00:36:46.148 IO depths : 1=1.9%, 2=4.6%, 4=13.3%, 8=69.1%, 16=11.1%, 32=0.0%, >=64=0.0% 00:36:46.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.148 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.148 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.148 filename0: (groupid=0, jobs=1): err= 0: pid=117545: Wed May 15 13:54:57 2024 00:36:46.148 read: IOPS=214, BW=858KiB/s (879kB/s)(8640KiB/10065msec) 00:36:46.148 slat (usec): min=5, max=399, avg=11.38, stdev= 9.63 00:36:46.148 clat (msec): min=3, max=194, avg=74.46, stdev=27.69 00:36:46.148 lat (msec): min=3, max=194, avg=74.47, stdev=27.69 00:36:46.148 clat percentiles (msec): 00:36:46.148 | 1.00th=[ 6], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 53], 00:36:46.148 | 30.00th=[ 61], 
40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:36:46.148 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 121], 00:36:46.148 | 99.00th=[ 157], 99.50th=[ 192], 99.90th=[ 194], 99.95th=[ 194], 00:36:46.148 | 99.99th=[ 194] 00:36:46.148 bw ( KiB/s): min= 512, max= 1584, per=4.51%, avg=857.65, stdev=227.42, samples=20 00:36:46.148 iops : min= 128, max= 396, avg=214.40, stdev=56.87, samples=20 00:36:46.148 lat (msec) : 4=0.74%, 10=1.48%, 20=0.74%, 50=15.83%, 100=66.57% 00:36:46.148 lat (msec) : 250=14.63% 00:36:46.148 cpu : usr=33.53%, sys=0.70%, ctx=939, majf=0, minf=9 00:36:46.148 IO depths : 1=1.5%, 2=3.1%, 4=10.5%, 8=72.7%, 16=12.1%, 32=0.0%, >=64=0.0% 00:36:46.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.148 complete : 0=0.0%, 4=90.1%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.148 issued rwts: total=2160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.148 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.148 filename0: (groupid=0, jobs=1): err= 0: pid=117546: Wed May 15 13:54:57 2024 00:36:46.148 read: IOPS=214, BW=856KiB/s (877kB/s)(8596KiB/10041msec) 00:36:46.148 slat (usec): min=4, max=8042, avg=15.18, stdev=173.31 00:36:46.148 clat (msec): min=31, max=155, avg=74.57, stdev=21.67 00:36:46.148 lat (msec): min=31, max=155, avg=74.58, stdev=21.67 00:36:46.148 clat percentiles (msec): 00:36:46.148 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 54], 00:36:46.148 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 81], 00:36:46.148 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 114], 00:36:46.148 | 99.00th=[ 133], 99.50th=[ 134], 99.90th=[ 157], 99.95th=[ 157], 00:36:46.148 | 99.99th=[ 157] 00:36:46.149 bw ( KiB/s): min= 640, max= 1080, per=4.49%, avg=853.20, stdev=128.85, samples=20 00:36:46.149 iops : min= 160, max= 270, avg=213.30, stdev=32.21, samples=20 00:36:46.149 lat (msec) : 50=16.89%, 100=71.20%, 250=11.91% 00:36:46.149 cpu : usr=32.45%, sys=0.70%, ctx=846, majf=0, minf=9 00:36:46.149 IO depths : 1=0.7%, 2=1.5%, 4=7.2%, 8=77.6%, 16=12.9%, 32=0.0%, >=64=0.0% 00:36:46.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 complete : 0=0.0%, 4=89.5%, 8=6.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 issued rwts: total=2149,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.149 filename0: (groupid=0, jobs=1): err= 0: pid=117547: Wed May 15 13:54:57 2024 00:36:46.149 read: IOPS=197, BW=791KiB/s (810kB/s)(7940KiB/10034msec) 00:36:46.149 slat (usec): min=3, max=8030, avg=21.86, stdev=225.26 00:36:46.149 clat (msec): min=36, max=175, avg=80.71, stdev=26.80 00:36:46.149 lat (msec): min=36, max=175, avg=80.73, stdev=26.80 00:36:46.149 clat percentiles (msec): 00:36:46.149 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 56], 00:36:46.149 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 83], 00:36:46.149 | 70.00th=[ 94], 80.00th=[ 104], 90.00th=[ 118], 95.00th=[ 132], 00:36:46.149 | 99.00th=[ 150], 99.50th=[ 157], 99.90th=[ 176], 99.95th=[ 176], 00:36:46.149 | 99.99th=[ 176] 00:36:46.149 bw ( KiB/s): min= 512, max= 1024, per=4.14%, avg=787.60, stdev=146.10, samples=20 00:36:46.149 iops : min= 128, max= 256, avg=196.90, stdev=36.53, samples=20 00:36:46.149 lat (msec) : 50=11.94%, 100=66.30%, 250=21.76% 00:36:46.149 cpu : usr=40.91%, sys=0.83%, ctx=1469, majf=0, minf=9 00:36:46.149 IO depths : 1=2.6%, 2=5.4%, 4=14.4%, 8=67.5%, 16=10.1%, 32=0.0%, >=64=0.0% 00:36:46.149 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 complete : 0=0.0%, 4=91.2%, 8=3.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 issued rwts: total=1985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.149 filename0: (groupid=0, jobs=1): err= 0: pid=117548: Wed May 15 13:54:57 2024 00:36:46.149 read: IOPS=236, BW=944KiB/s (967kB/s)(9496KiB/10058msec) 00:36:46.149 slat (usec): min=3, max=8025, avg=15.32, stdev=164.59 00:36:46.149 clat (msec): min=6, max=147, avg=67.65, stdev=23.03 00:36:46.149 lat (msec): min=6, max=147, avg=67.66, stdev=23.02 00:36:46.149 clat percentiles (msec): 00:36:46.149 | 1.00th=[ 14], 5.00th=[ 40], 10.00th=[ 44], 20.00th=[ 48], 00:36:46.149 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 65], 60.00th=[ 72], 00:36:46.149 | 70.00th=[ 79], 80.00th=[ 87], 90.00th=[ 100], 95.00th=[ 112], 00:36:46.149 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 148], 99.95th=[ 148], 00:36:46.149 | 99.99th=[ 148] 00:36:46.149 bw ( KiB/s): min= 616, max= 1152, per=4.96%, avg=943.20, stdev=188.80, samples=20 00:36:46.149 iops : min= 154, max= 288, avg=235.80, stdev=47.20, samples=20 00:36:46.149 lat (msec) : 10=0.67%, 20=0.67%, 50=23.50%, 100=65.67%, 250=9.48% 00:36:46.149 cpu : usr=43.04%, sys=0.88%, ctx=1383, majf=0, minf=9 00:36:46.149 IO depths : 1=0.5%, 2=1.3%, 4=8.8%, 8=76.5%, 16=12.9%, 32=0.0%, >=64=0.0% 00:36:46.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 issued rwts: total=2374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.149 filename1: (groupid=0, jobs=1): err= 0: pid=117549: Wed May 15 13:54:57 2024 00:36:46.149 read: IOPS=215, BW=861KiB/s (882kB/s)(8652KiB/10045msec) 00:36:46.149 slat (nsec): min=3928, max=80150, avg=11619.74, stdev=6025.98 00:36:46.149 clat (msec): min=32, max=183, avg=74.22, stdev=24.53 00:36:46.149 lat (msec): min=32, max=183, avg=74.24, stdev=24.53 00:36:46.149 clat percentiles (msec): 00:36:46.149 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:36:46.149 | 30.00th=[ 60], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 74], 00:36:46.149 | 70.00th=[ 82], 80.00th=[ 90], 90.00th=[ 107], 95.00th=[ 120], 00:36:46.149 | 99.00th=[ 165], 99.50th=[ 165], 99.90th=[ 184], 99.95th=[ 184], 00:36:46.149 | 99.99th=[ 184] 00:36:46.149 bw ( KiB/s): min= 512, max= 1152, per=4.52%, avg=858.80, stdev=173.21, samples=20 00:36:46.149 iops : min= 128, max= 288, avg=214.70, stdev=43.30, samples=20 00:36:46.149 lat (msec) : 50=16.60%, 100=70.23%, 250=13.18% 00:36:46.149 cpu : usr=39.39%, sys=0.84%, ctx=1130, majf=0, minf=9 00:36:46.149 IO depths : 1=1.2%, 2=2.6%, 4=9.7%, 8=74.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:36:46.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 issued rwts: total=2163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.149 filename1: (groupid=0, jobs=1): err= 0: pid=117550: Wed May 15 13:54:57 2024 00:36:46.149 read: IOPS=184, BW=737KiB/s (755kB/s)(7392KiB/10027msec) 00:36:46.149 slat (usec): min=4, max=8042, avg=34.22, stdev=395.35 00:36:46.149 clat (msec): min=36, max=165, avg=86.55, stdev=24.40 00:36:46.149 lat (msec): min=36, max=165, avg=86.59, stdev=24.39 00:36:46.149 
clat percentiles (msec): 00:36:46.149 | 1.00th=[ 45], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 70], 00:36:46.149 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 83], 60.00th=[ 88], 00:36:46.149 | 70.00th=[ 96], 80.00th=[ 106], 90.00th=[ 121], 95.00th=[ 129], 00:36:46.149 | 99.00th=[ 165], 99.50th=[ 165], 99.90th=[ 165], 99.95th=[ 165], 00:36:46.149 | 99.99th=[ 165] 00:36:46.149 bw ( KiB/s): min= 512, max= 808, per=3.85%, avg=732.85, stdev=69.69, samples=20 00:36:46.149 iops : min= 128, max= 202, avg=183.20, stdev=17.42, samples=20 00:36:46.149 lat (msec) : 50=4.55%, 100=72.46%, 250=23.00% 00:36:46.149 cpu : usr=36.06%, sys=0.77%, ctx=1013, majf=0, minf=9 00:36:46.149 IO depths : 1=2.1%, 2=5.0%, 4=15.6%, 8=66.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:36:46.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 complete : 0=0.0%, 4=91.6%, 8=3.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.149 filename1: (groupid=0, jobs=1): err= 0: pid=117551: Wed May 15 13:54:57 2024 00:36:46.149 read: IOPS=204, BW=817KiB/s (836kB/s)(8192KiB/10031msec) 00:36:46.149 slat (nsec): min=5683, max=55308, avg=11299.43, stdev=5387.21 00:36:46.149 clat (msec): min=35, max=187, avg=78.30, stdev=25.07 00:36:46.149 lat (msec): min=35, max=187, avg=78.31, stdev=25.07 00:36:46.149 clat percentiles (msec): 00:36:46.149 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:36:46.149 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 83], 00:36:46.149 | 70.00th=[ 85], 80.00th=[ 99], 90.00th=[ 110], 95.00th=[ 124], 00:36:46.149 | 99.00th=[ 155], 99.50th=[ 155], 99.90th=[ 188], 99.95th=[ 188], 00:36:46.149 | 99.99th=[ 188] 00:36:46.149 bw ( KiB/s): min= 512, max= 1024, per=4.27%, avg=812.30, stdev=160.53, samples=20 00:36:46.149 iops : min= 128, max= 256, avg=203.00, stdev=40.11, samples=20 00:36:46.149 lat (msec) : 50=16.94%, 100=65.14%, 250=17.92% 00:36:46.149 cpu : usr=32.52%, sys=0.61%, ctx=847, majf=0, minf=9 00:36:46.149 IO depths : 1=1.0%, 2=2.1%, 4=9.2%, 8=75.0%, 16=12.7%, 32=0.0%, >=64=0.0% 00:36:46.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 issued rwts: total=2048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.149 filename1: (groupid=0, jobs=1): err= 0: pid=117552: Wed May 15 13:54:57 2024 00:36:46.149 read: IOPS=174, BW=697KiB/s (713kB/s)(6976KiB/10012msec) 00:36:46.149 slat (usec): min=4, max=8019, avg=25.58, stdev=277.35 00:36:46.149 clat (msec): min=42, max=170, avg=91.60, stdev=24.05 00:36:46.149 lat (msec): min=42, max=170, avg=91.62, stdev=24.06 00:36:46.149 clat percentiles (msec): 00:36:46.149 | 1.00th=[ 47], 5.00th=[ 58], 10.00th=[ 67], 20.00th=[ 72], 00:36:46.149 | 30.00th=[ 77], 40.00th=[ 80], 50.00th=[ 87], 60.00th=[ 92], 00:36:46.149 | 70.00th=[ 105], 80.00th=[ 115], 90.00th=[ 124], 95.00th=[ 132], 00:36:46.149 | 99.00th=[ 171], 99.50th=[ 171], 99.90th=[ 171], 99.95th=[ 171], 00:36:46.149 | 99.99th=[ 171] 00:36:46.149 bw ( KiB/s): min= 512, max= 896, per=3.64%, avg=691.25, stdev=106.18, samples=20 00:36:46.149 iops : min= 128, max= 224, avg=172.80, stdev=26.55, samples=20 00:36:46.149 lat (msec) : 50=1.49%, 100=66.74%, 250=31.77% 00:36:46.149 cpu : usr=43.27%, sys=0.86%, ctx=1469, majf=0, minf=9 00:36:46.149 IO depths : 
1=3.3%, 2=8.0%, 4=20.5%, 8=58.8%, 16=9.4%, 32=0.0%, >=64=0.0% 00:36:46.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 complete : 0=0.0%, 4=92.9%, 8=1.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 issued rwts: total=1744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.149 filename1: (groupid=0, jobs=1): err= 0: pid=117553: Wed May 15 13:54:57 2024 00:36:46.149 read: IOPS=193, BW=773KiB/s (791kB/s)(7744KiB/10020msec) 00:36:46.149 slat (usec): min=4, max=8035, avg=23.74, stdev=284.76 00:36:46.149 clat (msec): min=33, max=177, avg=82.66, stdev=24.61 00:36:46.149 lat (msec): min=33, max=177, avg=82.69, stdev=24.61 00:36:46.149 clat percentiles (msec): 00:36:46.149 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 61], 00:36:46.149 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 87], 00:36:46.149 | 70.00th=[ 96], 80.00th=[ 102], 90.00th=[ 117], 95.00th=[ 130], 00:36:46.149 | 99.00th=[ 144], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 178], 00:36:46.149 | 99.99th=[ 178] 00:36:46.149 bw ( KiB/s): min= 520, max= 1072, per=4.04%, avg=768.05, stdev=126.01, samples=20 00:36:46.149 iops : min= 130, max= 268, avg=192.00, stdev=31.50, samples=20 00:36:46.149 lat (msec) : 50=12.86%, 100=65.50%, 250=21.64% 00:36:46.149 cpu : usr=38.60%, sys=0.94%, ctx=1328, majf=0, minf=9 00:36:46.149 IO depths : 1=1.4%, 2=3.0%, 4=11.4%, 8=72.2%, 16=12.0%, 32=0.0%, >=64=0.0% 00:36:46.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 complete : 0=0.0%, 4=89.9%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.149 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.149 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.149 filename1: (groupid=0, jobs=1): err= 0: pid=117554: Wed May 15 13:54:57 2024 00:36:46.149 read: IOPS=192, BW=769KiB/s (788kB/s)(7720KiB/10033msec) 00:36:46.149 slat (usec): min=4, max=8031, avg=16.18, stdev=182.61 00:36:46.149 clat (msec): min=35, max=175, avg=82.99, stdev=25.60 00:36:46.149 lat (msec): min=35, max=175, avg=83.01, stdev=25.60 00:36:46.149 clat percentiles (msec): 00:36:46.149 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 49], 20.00th=[ 61], 00:36:46.149 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 82], 60.00th=[ 85], 00:36:46.149 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 121], 95.00th=[ 129], 00:36:46.150 | 99.00th=[ 155], 99.50th=[ 169], 99.90th=[ 176], 99.95th=[ 176], 00:36:46.150 | 99.99th=[ 176] 00:36:46.150 bw ( KiB/s): min= 464, max= 992, per=4.04%, avg=768.00, stdev=145.51, samples=20 00:36:46.150 iops : min= 116, max= 248, avg=192.00, stdev=36.38, samples=20 00:36:46.150 lat (msec) : 50=11.61%, 100=66.58%, 250=21.81% 00:36:46.150 cpu : usr=34.50%, sys=0.68%, ctx=931, majf=0, minf=9 00:36:46.150 IO depths : 1=0.5%, 2=1.0%, 4=6.7%, 8=77.6%, 16=14.1%, 32=0.0%, >=64=0.0% 00:36:46.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 complete : 0=0.0%, 4=89.3%, 8=7.2%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 issued rwts: total=1930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.150 filename1: (groupid=0, jobs=1): err= 0: pid=117555: Wed May 15 13:54:57 2024 00:36:46.150 read: IOPS=171, BW=685KiB/s (702kB/s)(6868KiB/10021msec) 00:36:46.150 slat (usec): min=5, max=8059, avg=26.74, stdev=335.47 00:36:46.150 clat (msec): min=22, max=173, avg=93.21, stdev=26.32 00:36:46.150 lat 
(msec): min=22, max=173, avg=93.24, stdev=26.31 00:36:46.150 clat percentiles (msec): 00:36:46.150 | 1.00th=[ 48], 5.00th=[ 50], 10.00th=[ 67], 20.00th=[ 72], 00:36:46.150 | 30.00th=[ 74], 40.00th=[ 84], 50.00th=[ 89], 60.00th=[ 97], 00:36:46.150 | 70.00th=[ 107], 80.00th=[ 117], 90.00th=[ 128], 95.00th=[ 144], 00:36:46.150 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 174], 99.95th=[ 174], 00:36:46.150 | 99.99th=[ 174] 00:36:46.150 bw ( KiB/s): min= 512, max= 972, per=3.58%, avg=680.25, stdev=122.35, samples=20 00:36:46.150 iops : min= 128, max= 243, avg=170.05, stdev=30.59, samples=20 00:36:46.150 lat (msec) : 50=5.53%, 100=58.88%, 250=35.59% 00:36:46.150 cpu : usr=36.93%, sys=0.82%, ctx=1022, majf=0, minf=9 00:36:46.150 IO depths : 1=3.4%, 2=7.1%, 4=18.2%, 8=62.0%, 16=9.3%, 32=0.0%, >=64=0.0% 00:36:46.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 complete : 0=0.0%, 4=91.9%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 issued rwts: total=1717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.150 filename1: (groupid=0, jobs=1): err= 0: pid=117556: Wed May 15 13:54:57 2024 00:36:46.150 read: IOPS=176, BW=704KiB/s (721kB/s)(7052KiB/10017msec) 00:36:46.150 slat (usec): min=5, max=8021, avg=19.00, stdev=213.50 00:36:46.150 clat (msec): min=20, max=201, avg=90.80, stdev=28.44 00:36:46.150 lat (msec): min=20, max=201, avg=90.82, stdev=28.45 00:36:46.150 clat percentiles (msec): 00:36:46.150 | 1.00th=[ 46], 5.00th=[ 49], 10.00th=[ 61], 20.00th=[ 70], 00:36:46.150 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 96], 00:36:46.150 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 129], 95.00th=[ 150], 00:36:46.150 | 99.00th=[ 167], 99.50th=[ 167], 99.90th=[ 203], 99.95th=[ 203], 00:36:46.150 | 99.99th=[ 203] 00:36:46.150 bw ( KiB/s): min= 508, max= 1000, per=3.67%, avg=698.45, stdev=139.92, samples=20 00:36:46.150 iops : min= 127, max= 250, avg=174.60, stdev=34.98, samples=20 00:36:46.150 lat (msec) : 50=6.52%, 100=61.20%, 250=32.27% 00:36:46.150 cpu : usr=33.35%, sys=0.64%, ctx=913, majf=0, minf=9 00:36:46.150 IO depths : 1=2.4%, 2=5.3%, 4=14.6%, 8=66.8%, 16=10.8%, 32=0.0%, >=64=0.0% 00:36:46.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 complete : 0=0.0%, 4=91.2%, 8=3.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 issued rwts: total=1763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.150 filename2: (groupid=0, jobs=1): err= 0: pid=117557: Wed May 15 13:54:57 2024 00:36:46.150 read: IOPS=173, BW=694KiB/s (711kB/s)(6960KiB/10028msec) 00:36:46.150 slat (usec): min=4, max=8019, avg=25.41, stdev=249.64 00:36:46.150 clat (msec): min=30, max=176, avg=92.00, stdev=26.50 00:36:46.150 lat (msec): min=30, max=176, avg=92.02, stdev=26.51 00:36:46.150 clat percentiles (msec): 00:36:46.150 | 1.00th=[ 47], 5.00th=[ 58], 10.00th=[ 67], 20.00th=[ 71], 00:36:46.150 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 86], 60.00th=[ 96], 00:36:46.150 | 70.00th=[ 104], 80.00th=[ 112], 90.00th=[ 130], 95.00th=[ 144], 00:36:46.150 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 178], 99.95th=[ 178], 00:36:46.150 | 99.99th=[ 178] 00:36:46.150 bw ( KiB/s): min= 512, max= 896, per=3.63%, avg=689.60, stdev=112.62, samples=20 00:36:46.150 iops : min= 128, max= 224, avg=172.40, stdev=28.15, samples=20 00:36:46.150 lat (msec) : 50=2.64%, 100=62.47%, 250=34.89% 00:36:46.150 cpu : usr=39.62%, 
sys=0.78%, ctx=1314, majf=0, minf=9 00:36:46.150 IO depths : 1=2.0%, 2=4.5%, 4=13.5%, 8=68.2%, 16=11.9%, 32=0.0%, >=64=0.0% 00:36:46.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 complete : 0=0.0%, 4=91.0%, 8=4.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 issued rwts: total=1740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.150 filename2: (groupid=0, jobs=1): err= 0: pid=117558: Wed May 15 13:54:57 2024 00:36:46.150 read: IOPS=179, BW=717KiB/s (735kB/s)(7188KiB/10019msec) 00:36:46.150 slat (usec): min=4, max=8037, avg=16.11, stdev=189.41 00:36:46.150 clat (msec): min=35, max=203, avg=89.09, stdev=27.95 00:36:46.150 lat (msec): min=35, max=203, avg=89.10, stdev=27.95 00:36:46.150 clat percentiles (msec): 00:36:46.150 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 71], 00:36:46.150 | 30.00th=[ 72], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 96], 00:36:46.150 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 144], 00:36:46.150 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 205], 99.95th=[ 205], 00:36:46.150 | 99.99th=[ 205] 00:36:46.150 bw ( KiB/s): min= 512, max= 1072, per=3.75%, avg=712.45, stdev=138.82, samples=20 00:36:46.150 iops : min= 128, max= 268, avg=178.10, stdev=34.70, samples=20 00:36:46.150 lat (msec) : 50=8.96%, 100=61.88%, 250=29.16% 00:36:46.150 cpu : usr=32.38%, sys=0.76%, ctx=844, majf=0, minf=9 00:36:46.150 IO depths : 1=1.8%, 2=4.2%, 4=13.3%, 8=69.6%, 16=11.1%, 32=0.0%, >=64=0.0% 00:36:46.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 complete : 0=0.0%, 4=90.8%, 8=4.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 issued rwts: total=1797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.150 filename2: (groupid=0, jobs=1): err= 0: pid=117559: Wed May 15 13:54:57 2024 00:36:46.150 read: IOPS=209, BW=838KiB/s (858kB/s)(8404KiB/10029msec) 00:36:46.150 slat (usec): min=3, max=8038, avg=15.10, stdev=175.20 00:36:46.150 clat (msec): min=34, max=167, avg=76.27, stdev=25.08 00:36:46.150 lat (msec): min=34, max=167, avg=76.28, stdev=25.09 00:36:46.150 clat percentiles (msec): 00:36:46.150 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 55], 00:36:46.150 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 80], 00:36:46.150 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 122], 00:36:46.150 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 169], 00:36:46.150 | 99.99th=[ 169] 00:36:46.150 bw ( KiB/s): min= 592, max= 1120, per=4.38%, avg=833.75, stdev=155.34, samples=20 00:36:46.150 iops : min= 148, max= 280, avg=208.40, stdev=38.81, samples=20 00:36:46.150 lat (msec) : 50=17.18%, 100=67.59%, 250=15.23% 00:36:46.150 cpu : usr=32.34%, sys=0.79%, ctx=844, majf=0, minf=9 00:36:46.150 IO depths : 1=1.1%, 2=2.3%, 4=9.0%, 8=75.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:36:46.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 complete : 0=0.0%, 4=89.8%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 issued rwts: total=2101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.150 filename2: (groupid=0, jobs=1): err= 0: pid=117560: Wed May 15 13:54:57 2024 00:36:46.150 read: IOPS=203, BW=813KiB/s (833kB/s)(8176KiB/10053msec) 00:36:46.150 slat (usec): min=4, max=8029, avg=19.91, stdev=250.30 00:36:46.150 clat (msec): 
min=15, max=209, avg=78.49, stdev=24.18 00:36:46.150 lat (msec): min=15, max=209, avg=78.51, stdev=24.18 00:36:46.150 clat percentiles (msec): 00:36:46.150 | 1.00th=[ 35], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:36:46.150 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:36:46.150 | 70.00th=[ 93], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 121], 00:36:46.150 | 99.00th=[ 138], 99.50th=[ 157], 99.90th=[ 209], 99.95th=[ 209], 00:36:46.150 | 99.99th=[ 209] 00:36:46.150 bw ( KiB/s): min= 640, max= 1024, per=4.26%, avg=810.45, stdev=122.45, samples=20 00:36:46.150 iops : min= 160, max= 256, avg=202.55, stdev=30.56, samples=20 00:36:46.150 lat (msec) : 20=0.78%, 50=13.85%, 100=68.69%, 250=16.68% 00:36:46.150 cpu : usr=32.56%, sys=0.58%, ctx=876, majf=0, minf=9 00:36:46.150 IO depths : 1=1.4%, 2=3.1%, 4=10.8%, 8=72.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:36:46.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 complete : 0=0.0%, 4=90.3%, 8=4.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 issued rwts: total=2044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.150 filename2: (groupid=0, jobs=1): err= 0: pid=117561: Wed May 15 13:54:57 2024 00:36:46.150 read: IOPS=188, BW=754KiB/s (772kB/s)(7556KiB/10019msec) 00:36:46.150 slat (usec): min=4, max=4538, avg=15.66, stdev=104.39 00:36:46.150 clat (msec): min=32, max=188, avg=84.73, stdev=25.88 00:36:46.150 lat (msec): min=32, max=188, avg=84.75, stdev=25.87 00:36:46.150 clat percentiles (msec): 00:36:46.150 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 65], 00:36:46.150 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 86], 00:36:46.150 | 70.00th=[ 94], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 138], 00:36:46.150 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 188], 99.95th=[ 188], 00:36:46.150 | 99.99th=[ 188] 00:36:46.150 bw ( KiB/s): min= 512, max= 1024, per=3.94%, avg=749.25, stdev=150.40, samples=20 00:36:46.150 iops : min= 128, max= 256, avg=187.30, stdev=37.60, samples=20 00:36:46.150 lat (msec) : 50=7.89%, 100=66.23%, 250=25.89% 00:36:46.150 cpu : usr=41.24%, sys=0.84%, ctx=1152, majf=0, minf=9 00:36:46.150 IO depths : 1=2.5%, 2=5.2%, 4=14.0%, 8=67.7%, 16=10.6%, 32=0.0%, >=64=0.0% 00:36:46.150 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 complete : 0=0.0%, 4=90.8%, 8=4.2%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.150 issued rwts: total=1889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.150 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.150 filename2: (groupid=0, jobs=1): err= 0: pid=117562: Wed May 15 13:54:57 2024 00:36:46.150 read: IOPS=190, BW=761KiB/s (779kB/s)(7628KiB/10023msec) 00:36:46.150 slat (usec): min=3, max=4026, avg=21.06, stdev=159.00 00:36:46.150 clat (msec): min=23, max=198, avg=83.95, stdev=25.39 00:36:46.150 lat (msec): min=23, max=198, avg=83.97, stdev=25.39 00:36:46.150 clat percentiles (msec): 00:36:46.151 | 1.00th=[ 33], 5.00th=[ 47], 10.00th=[ 55], 20.00th=[ 64], 00:36:46.151 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 80], 60.00th=[ 86], 00:36:46.151 | 70.00th=[ 94], 80.00th=[ 107], 90.00th=[ 118], 95.00th=[ 133], 00:36:46.151 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 199], 99.95th=[ 199], 00:36:46.151 | 99.99th=[ 199] 00:36:46.151 bw ( KiB/s): min= 472, max= 944, per=3.98%, avg=756.20, stdev=139.65, samples=20 00:36:46.151 iops : min= 118, max= 236, avg=189.05, stdev=34.91, samples=20 00:36:46.151 lat (msec) : 
50=7.18%, 100=67.65%, 250=25.17% 00:36:46.151 cpu : usr=43.97%, sys=1.01%, ctx=1291, majf=0, minf=9 00:36:46.151 IO depths : 1=2.0%, 2=4.9%, 4=14.5%, 8=67.6%, 16=11.0%, 32=0.0%, >=64=0.0% 00:36:46.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.151 complete : 0=0.0%, 4=91.3%, 8=3.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.151 issued rwts: total=1907,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.151 filename2: (groupid=0, jobs=1): err= 0: pid=117563: Wed May 15 13:54:57 2024 00:36:46.151 read: IOPS=177, BW=710KiB/s (727kB/s)(7116KiB/10020msec) 00:36:46.151 slat (usec): min=4, max=8042, avg=20.71, stdev=213.10 00:36:46.151 clat (msec): min=39, max=177, avg=89.91, stdev=23.73 00:36:46.151 lat (msec): min=39, max=177, avg=89.93, stdev=23.72 00:36:46.151 clat percentiles (msec): 00:36:46.151 | 1.00th=[ 48], 5.00th=[ 58], 10.00th=[ 62], 20.00th=[ 72], 00:36:46.151 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 94], 00:36:46.151 | 70.00th=[ 103], 80.00th=[ 113], 90.00th=[ 126], 95.00th=[ 130], 00:36:46.151 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 163], 99.95th=[ 178], 00:36:46.151 | 99.99th=[ 178] 00:36:46.151 bw ( KiB/s): min= 512, max= 896, per=3.71%, avg=705.20, stdev=116.06, samples=20 00:36:46.151 iops : min= 128, max= 224, avg=176.30, stdev=29.01, samples=20 00:36:46.151 lat (msec) : 50=1.63%, 100=67.73%, 250=30.64% 00:36:46.151 cpu : usr=37.81%, sys=1.08%, ctx=1036, majf=0, minf=9 00:36:46.151 IO depths : 1=2.7%, 2=6.1%, 4=17.0%, 8=64.0%, 16=10.2%, 32=0.0%, >=64=0.0% 00:36:46.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.151 complete : 0=0.0%, 4=91.7%, 8=3.1%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.151 issued rwts: total=1779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.151 filename2: (groupid=0, jobs=1): err= 0: pid=117564: Wed May 15 13:54:57 2024 00:36:46.151 read: IOPS=224, BW=897KiB/s (918kB/s)(8996KiB/10034msec) 00:36:46.151 slat (usec): min=4, max=7287, avg=21.59, stdev=195.23 00:36:46.151 clat (msec): min=27, max=160, avg=71.14, stdev=23.52 00:36:46.151 lat (msec): min=27, max=160, avg=71.16, stdev=23.53 00:36:46.151 clat percentiles (msec): 00:36:46.151 | 1.00th=[ 39], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 50], 00:36:46.151 | 30.00th=[ 55], 40.00th=[ 61], 50.00th=[ 68], 60.00th=[ 72], 00:36:46.151 | 70.00th=[ 81], 80.00th=[ 89], 90.00th=[ 105], 95.00th=[ 118], 00:36:46.151 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 161], 00:36:46.151 | 99.99th=[ 161] 00:36:46.151 bw ( KiB/s): min= 512, max= 1200, per=4.70%, avg=893.20, stdev=173.05, samples=20 00:36:46.151 iops : min= 128, max= 300, avg=223.30, stdev=43.26, samples=20 00:36:46.151 lat (msec) : 50=20.63%, 100=66.47%, 250=12.89% 00:36:46.151 cpu : usr=44.18%, sys=0.83%, ctx=1373, majf=0, minf=9 00:36:46.151 IO depths : 1=0.9%, 2=2.3%, 4=9.2%, 8=75.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:36:46.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.151 complete : 0=0.0%, 4=89.8%, 8=5.5%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.151 issued rwts: total=2249,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.151 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:46.151 00:36:46.151 Run status group 0 (all jobs): 00:36:46.151 READ: bw=18.6MiB/s (19.5MB/s), 685KiB/s-944KiB/s (702kB/s-967kB/s), io=187MiB (196MB), run=10012-10069msec 
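The READ line that closes the group above is fio's aggregate for all jobs in group 0: the 685KiB/s-944KiB/s range is simply the slowest and fastest individual job, and the headline bandwidth works out to the group's total io divided by its wall-clock window. A quick cross-check against the logged figures (a stand-alone sketch, not part of dif.sh; the 10.069 s divisor is the longest run= value above converted to seconds):

  # Hypothetical sanity check: reproduce the group bandwidth from the io= and run= fields.
  awk 'BEGIN { io_mib = 187; wall_s = 10.069; printf "aggregate ~ %.1f MiB/s\n", io_mib / wall_s }'
  # prints "aggregate ~ 18.6 MiB/s", matching the bw= value in the Run status line.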
00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.151 bdev_null0 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.151 [2024-05-15 13:54:57.544447] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.151 13:54:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.151 bdev_null1 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.151 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:46.152 { 00:36:46.152 "params": { 00:36:46.152 "name": "Nvme$subsystem", 00:36:46.152 "trtype": "$TEST_TRANSPORT", 00:36:46.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:46.152 "adrfam": "ipv4", 00:36:46.152 "trsvcid": "$NVMF_PORT", 00:36:46.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:46.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:46.152 "hdgst": ${hdgst:-false}, 00:36:46.152 "ddgst": ${ddgst:-false} 00:36:46.152 }, 00:36:46.152 "method": "bdev_nvme_attach_controller" 00:36:46.152 } 00:36:46.152 EOF 00:36:46.152 )") 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:46.152 { 00:36:46.152 "params": { 00:36:46.152 "name": "Nvme$subsystem", 00:36:46.152 "trtype": "$TEST_TRANSPORT", 00:36:46.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:46.152 "adrfam": "ipv4", 00:36:46.152 "trsvcid": "$NVMF_PORT", 00:36:46.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:46.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:46.152 "hdgst": ${hdgst:-false}, 00:36:46.152 "ddgst": ${ddgst:-false} 00:36:46.152 }, 00:36:46.152 "method": "bdev_nvme_attach_controller" 00:36:46.152 } 00:36:46.152 EOF 00:36:46.152 )") 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:46.152 "params": { 00:36:46.152 "name": "Nvme0", 00:36:46.152 "trtype": "tcp", 00:36:46.152 "traddr": "10.0.0.2", 00:36:46.152 "adrfam": "ipv4", 00:36:46.152 "trsvcid": "4420", 00:36:46.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:46.152 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:46.152 "hdgst": false, 00:36:46.152 "ddgst": false 00:36:46.152 }, 00:36:46.152 "method": "bdev_nvme_attach_controller" 00:36:46.152 },{ 00:36:46.152 "params": { 00:36:46.152 "name": "Nvme1", 00:36:46.152 "trtype": "tcp", 00:36:46.152 "traddr": "10.0.0.2", 00:36:46.152 "adrfam": "ipv4", 00:36:46.152 "trsvcid": "4420", 00:36:46.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:46.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:46.152 "hdgst": false, 00:36:46.152 "ddgst": false 00:36:46.152 }, 00:36:46.152 "method": "bdev_nvme_attach_controller" 00:36:46.152 }' 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:46.152 13:54:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:46.152 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:46.152 ... 00:36:46.152 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:46.152 ... 
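The filename0/filename1 banners above echo the job file that gen_fio_conf hands to fio on /dev/fd/61. A minimal sketch of what an equivalent job file could look like for the traced parameters (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, one extra file); the option spellings and the NvmeXn1 bdev names are assumptions for illustration, not copied from dif.sh:

  # Hypothetical reconstruction of the generated job file (sketch only).
  cat <<'EOF' > /tmp/dif_rand_params.fio
  [global]
  ioengine=spdk_bdev    # served by the LD_PRELOADed build/fio/spdk_bdev plugin
  thread=1              # SPDK fio plugin jobs run as threads ("Starting 4 threads" below)
  rw=randread
  bs=8k,16k,128k        # read,write,trim sizes, matching the (R)/(W)/(T) values in the banner
  iodepth=8
  runtime=5

  [filename0]
  filename=Nvme0n1      # assumed bdev name for the controller attached as "Nvme0"
  numjobs=2

  [filename1]
  filename=Nvme1n1
  numjobs=2
  EOF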
00:36:46.152 fio-3.35 00:36:46.152 Starting 4 threads 00:36:51.415 00:36:51.415 filename0: (groupid=0, jobs=1): err= 0: pid=117691: Wed May 15 13:55:03 2024 00:36:51.415 read: IOPS=1879, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5001msec) 00:36:51.415 slat (usec): min=4, max=102, avg=23.22, stdev=12.49 00:36:51.415 clat (usec): min=1125, max=6187, avg=4136.20, stdev=176.91 00:36:51.415 lat (usec): min=1133, max=6206, avg=4159.42, stdev=177.54 00:36:51.415 clat percentiles (usec): 00:36:51.415 | 1.00th=[ 3851], 5.00th=[ 3916], 10.00th=[ 3982], 20.00th=[ 4015], 00:36:51.415 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:36:51.415 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4359], 00:36:51.415 | 99.00th=[ 4490], 99.50th=[ 4555], 99.90th=[ 5145], 99.95th=[ 6194], 00:36:51.415 | 99.99th=[ 6194] 00:36:51.415 bw ( KiB/s): min=14592, max=15488, per=25.02%, avg=15050.44, stdev=257.91, samples=9 00:36:51.415 iops : min= 1824, max= 1936, avg=1881.22, stdev=32.21, samples=9 00:36:51.415 lat (msec) : 2=0.09%, 4=15.63%, 10=84.29% 00:36:51.415 cpu : usr=95.60%, sys=3.12%, ctx=7, majf=0, minf=9 00:36:51.415 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.415 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.415 issued rwts: total=9400,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.415 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:51.415 filename0: (groupid=0, jobs=1): err= 0: pid=117692: Wed May 15 13:55:03 2024 00:36:51.415 read: IOPS=1883, BW=14.7MiB/s (15.4MB/s)(73.6MiB/5002msec) 00:36:51.415 slat (nsec): min=6602, max=67426, avg=12965.49, stdev=9303.78 00:36:51.415 clat (usec): min=1332, max=4895, avg=4182.67, stdev=191.25 00:36:51.415 lat (usec): min=1349, max=4919, avg=4195.64, stdev=190.33 00:36:51.415 clat percentiles (usec): 00:36:51.415 | 1.00th=[ 3851], 5.00th=[ 4015], 10.00th=[ 4047], 20.00th=[ 4080], 00:36:51.415 | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:36:51.415 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4424], 00:36:51.415 | 99.00th=[ 4555], 99.50th=[ 4621], 99.90th=[ 4752], 99.95th=[ 4817], 00:36:51.415 | 99.99th=[ 4883] 00:36:51.415 bw ( KiB/s): min=14720, max=15488, per=25.09%, avg=15095.11, stdev=280.24, samples=9 00:36:51.415 iops : min= 1840, max= 1936, avg=1886.89, stdev=35.03, samples=9 00:36:51.415 lat (msec) : 2=0.20%, 4=3.88%, 10=95.92% 00:36:51.415 cpu : usr=94.08%, sys=4.56%, ctx=11, majf=0, minf=0 00:36:51.415 IO depths : 1=10.8%, 2=24.4%, 4=50.6%, 8=14.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.415 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.415 issued rwts: total=9419,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.415 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:51.415 filename1: (groupid=0, jobs=1): err= 0: pid=117693: Wed May 15 13:55:03 2024 00:36:51.415 read: IOPS=1881, BW=14.7MiB/s (15.4MB/s)(73.5MiB/5003msec) 00:36:51.415 slat (nsec): min=4082, max=89633, avg=17483.56, stdev=9455.52 00:36:51.415 clat (usec): min=1243, max=6399, avg=4174.46, stdev=169.56 00:36:51.415 lat (usec): min=1251, max=6413, avg=4191.94, stdev=168.78 00:36:51.415 clat percentiles (usec): 00:36:51.415 | 1.00th=[ 3884], 5.00th=[ 3982], 10.00th=[ 4015], 20.00th=[ 4080], 00:36:51.415 | 30.00th=[ 4113], 
40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:36:51.415 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4359], 95.00th=[ 4424], 00:36:51.415 | 99.00th=[ 4555], 99.50th=[ 4555], 99.90th=[ 5211], 99.95th=[ 5407], 00:36:51.415 | 99.99th=[ 6390] 00:36:51.415 bw ( KiB/s): min=14720, max=15360, per=25.07%, avg=15080.89, stdev=249.30, samples=9 00:36:51.415 iops : min= 1840, max= 1920, avg=1885.11, stdev=31.16, samples=9 00:36:51.415 lat (msec) : 2=0.03%, 4=6.82%, 10=93.15% 00:36:51.415 cpu : usr=94.46%, sys=4.02%, ctx=3, majf=0, minf=9 00:36:51.415 IO depths : 1=11.8%, 2=24.0%, 4=51.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.415 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.415 issued rwts: total=9411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.415 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:51.415 filename1: (groupid=0, jobs=1): err= 0: pid=117694: Wed May 15 13:55:03 2024 00:36:51.415 read: IOPS=1878, BW=14.7MiB/s (15.4MB/s)(73.4MiB/5001msec) 00:36:51.415 slat (nsec): min=5535, max=95928, avg=22769.01, stdev=12781.96 00:36:51.415 clat (usec): min=2386, max=7258, avg=4138.45, stdev=175.31 00:36:51.415 lat (usec): min=2397, max=7265, avg=4161.22, stdev=175.99 00:36:51.415 clat percentiles (usec): 00:36:51.415 | 1.00th=[ 3851], 5.00th=[ 3916], 10.00th=[ 3949], 20.00th=[ 4015], 00:36:51.415 | 30.00th=[ 4047], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:36:51.415 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4359], 00:36:51.415 | 99.00th=[ 4490], 99.50th=[ 4621], 99.90th=[ 6325], 99.95th=[ 6783], 00:36:51.415 | 99.99th=[ 7242] 00:36:51.415 bw ( KiB/s): min=14592, max=15488, per=25.01%, avg=15047.11, stdev=256.89, samples=9 00:36:51.415 iops : min= 1824, max= 1936, avg=1880.89, stdev=32.11, samples=9 00:36:51.415 lat (msec) : 4=16.21%, 10=83.79% 00:36:51.415 cpu : usr=95.38%, sys=3.28%, ctx=82, majf=0, minf=10 00:36:51.415 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:51.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.415 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:51.415 issued rwts: total=9392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:51.415 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:51.415 00:36:51.415 Run status group 0 (all jobs): 00:36:51.415 READ: bw=58.7MiB/s (61.6MB/s), 14.7MiB/s-14.7MiB/s (15.4MB/s-15.4MB/s), io=294MiB (308MB), run=5001-5003msec 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.415 13:55:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.416 13:55:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:51.416 13:55:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.416 13:55:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.416 13:55:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.416 00:36:51.416 real 0m23.667s 00:36:51.416 user 2m6.495s 00:36:51.416 sys 0m4.628s 00:36:51.416 13:55:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:51.416 13:55:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:51.416 ************************************ 00:36:51.416 END TEST fio_dif_rand_params 00:36:51.416 ************************************ 00:36:51.416 13:55:03 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:51.416 13:55:03 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:36:51.416 13:55:03 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:51.416 13:55:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:51.416 ************************************ 00:36:51.416 START TEST fio_dif_digest 00:36:51.416 ************************************ 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
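The digest variant just configured (NULL_DIF=3, bs=128k,128k,128k, numjobs=3, iodepth=3, runtime=10, hdgst/ddgst enabled) builds its single subsystem with the four RPCs traced below. Condensed into roughly equivalent direct scripts/rpc.py calls, assuming the default RPC socket (the test itself goes through its rpc_cmd wrapper, and the MB/block-size reading of the 64 and 512 arguments follows the standard bdev_null_create interpretation):

  # Sketch of the subsystem setup that the following trace performs via rpc_cmd.
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3   # 64 MB null bdev, 512B blocks + 16B metadata, DIF type 3
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The digests themselves are not set here; they surface later as "hdgst": true / "ddgst": true in the JSON handed to bdev_nvme_attach_controller on the initiator side.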
00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.416 bdev_null0 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:51.416 [2024-05-15 13:55:03.817836] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:51.416 { 00:36:51.416 "params": { 00:36:51.416 "name": "Nvme$subsystem", 00:36:51.416 "trtype": "$TEST_TRANSPORT", 00:36:51.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:51.416 "adrfam": "ipv4", 00:36:51.416 "trsvcid": "$NVMF_PORT", 00:36:51.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:51.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:51.416 "hdgst": ${hdgst:-false}, 00:36:51.416 "ddgst": ${ddgst:-false} 00:36:51.416 }, 00:36:51.416 "method": "bdev_nvme_attach_controller" 00:36:51.416 } 00:36:51.416 EOF 00:36:51.416 )") 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:51.416 "params": { 00:36:51.416 "name": "Nvme0", 00:36:51.416 "trtype": "tcp", 00:36:51.416 "traddr": "10.0.0.2", 00:36:51.416 "adrfam": "ipv4", 00:36:51.416 "trsvcid": "4420", 00:36:51.416 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:51.416 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:51.416 "hdgst": true, 00:36:51.416 "ddgst": true 00:36:51.416 }, 00:36:51.416 "method": "bdev_nvme_attach_controller" 00:36:51.416 }' 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:51.416 13:55:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:51.416 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:51.416 ... 
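Everything fio needs for this run arrives through file descriptors: LD_PRELOAD pulls in the spdk_bdev ioengine, /dev/fd/62 carries the JSON printed just above, and /dev/fd/61 carries the job file. A stand-alone sketch of the same pattern (the outer "subsystems"/"config" envelope, the Nvme0n1 filename, and the job options are assumptions for illustration; only the inner bdev_nvme_attach_controller entry appears verbatim in the log, and a reachable NVMe/TCP target at 10.0.0.2:4420 is assumed):

  # Hedged sketch of the LD_PRELOAD + fd-based fio invocation traced above.
  cat > /tmp/nvme0_bdev.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": true, "ddgst": true } } ] } ] }
  EOF
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /usr/src/fio/fio \
    --ioengine=spdk_bdev \
    --spdk_json_conf=/tmp/nvme0_bdev.json \
    <(printf '[digest]\nfilename=Nvme0n1\nthread=1\nrw=randread\nbs=128k\niodepth=3\nnumjobs=3\nruntime=10\n')
  # With hdgst/ddgst true, the initiator adds NVMe/TCP header and data digests (CRC32C) on the wire.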
00:36:51.416 fio-3.35 00:36:51.416 Starting 3 threads 00:37:03.641 00:37:03.641 filename0: (groupid=0, jobs=1): err= 0: pid=117796: Wed May 15 13:55:14 2024 00:37:03.641 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(232MiB/10049msec) 00:37:03.641 slat (nsec): min=7484, max=52998, avg=13960.28, stdev=6822.65 00:37:03.641 clat (usec): min=9056, max=52736, avg=16168.26, stdev=2433.38 00:37:03.641 lat (usec): min=9068, max=52760, avg=16182.22, stdev=2433.90 00:37:03.641 clat percentiles (usec): 00:37:03.641 | 1.00th=[ 9896], 5.00th=[10421], 10.00th=[11469], 20.00th=[15795], 00:37:03.641 | 30.00th=[16188], 40.00th=[16450], 50.00th=[16712], 60.00th=[16909], 00:37:03.641 | 70.00th=[17171], 80.00th=[17433], 90.00th=[17957], 95.00th=[18220], 00:37:03.641 | 99.00th=[18744], 99.50th=[19006], 99.90th=[49021], 99.95th=[52691], 00:37:03.641 | 99.99th=[52691] 00:37:03.641 bw ( KiB/s): min=22272, max=26112, per=30.08%, avg=23700.21, stdev=1167.90, samples=19 00:37:03.641 iops : min= 174, max= 204, avg=185.16, stdev= 9.12, samples=19 00:37:03.641 lat (msec) : 10=1.45%, 20=98.44%, 50=0.05%, 100=0.05% 00:37:03.641 cpu : usr=94.47%, sys=4.16%, ctx=5, majf=0, minf=9 00:37:03.641 IO depths : 1=27.3%, 2=72.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:03.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.641 issued rwts: total=1858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.641 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:03.641 filename0: (groupid=0, jobs=1): err= 0: pid=117797: Wed May 15 13:55:14 2024 00:37:03.641 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(265MiB/10008msec) 00:37:03.641 slat (nsec): min=8200, max=89918, avg=18384.86, stdev=6853.96 00:37:03.641 clat (usec): min=6295, max=17787, avg=14117.08, stdev=2174.81 00:37:03.641 lat (usec): min=6314, max=17802, avg=14135.47, stdev=2175.83 00:37:03.641 clat percentiles (usec): 00:37:03.641 | 1.00th=[ 7963], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[13173], 00:37:03.641 | 30.00th=[13698], 40.00th=[14222], 50.00th=[14615], 60.00th=[15008], 00:37:03.641 | 70.00th=[15270], 80.00th=[15795], 90.00th=[16319], 95.00th=[16712], 00:37:03.641 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17695], 99.95th=[17695], 00:37:03.641 | 99.99th=[17695] 00:37:03.641 bw ( KiB/s): min=24015, max=29184, per=34.37%, avg=27079.53, stdev=1244.97, samples=19 00:37:03.641 iops : min= 187, max= 228, avg=211.53, stdev= 9.81, samples=19 00:37:03.641 lat (msec) : 10=10.65%, 20=89.35% 00:37:03.641 cpu : usr=94.69%, sys=3.83%, ctx=35, majf=0, minf=0 00:37:03.641 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:03.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.641 issued rwts: total=2123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.641 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:03.642 filename0: (groupid=0, jobs=1): err= 0: pid=117798: Wed May 15 13:55:14 2024 00:37:03.642 read: IOPS=220, BW=27.5MiB/s (28.9MB/s)(276MiB/10010msec) 00:37:03.642 slat (nsec): min=7318, max=93078, avg=18650.64, stdev=7405.93 00:37:03.642 clat (usec): min=9479, max=93983, avg=13594.30, stdev=6859.49 00:37:03.642 lat (usec): min=9500, max=94013, avg=13612.95, stdev=6859.48 00:37:03.642 clat percentiles (usec): 00:37:03.642 | 1.00th=[10159], 5.00th=[10814], 10.00th=[11338], 20.00th=[11731], 00:37:03.642 | 
30.00th=[11994], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:37:03.642 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[14091], 00:37:03.642 | 99.00th=[53216], 99.50th=[53740], 99.90th=[55837], 99.95th=[93848], 00:37:03.642 | 99.99th=[93848] 00:37:03.642 bw ( KiB/s): min=23808, max=30720, per=35.99%, avg=28362.11, stdev=1982.19, samples=19 00:37:03.642 iops : min= 186, max= 240, avg=221.58, stdev=15.49, samples=19 00:37:03.642 lat (msec) : 10=0.59%, 20=96.78%, 100=2.63% 00:37:03.642 cpu : usr=93.43%, sys=4.88%, ctx=15, majf=0, minf=0 00:37:03.642 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:03.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:03.642 issued rwts: total=2205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:03.642 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:03.642 00:37:03.642 Run status group 0 (all jobs): 00:37:03.642 READ: bw=76.9MiB/s (80.7MB/s), 23.1MiB/s-27.5MiB/s (24.2MB/s-28.9MB/s), io=773MiB (811MB), run=10008-10049msec 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:03.642 00:37:03.642 real 0m11.124s 00:37:03.642 user 0m29.020s 00:37:03.642 sys 0m1.625s 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:03.642 ************************************ 00:37:03.642 13:55:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:03.642 END TEST fio_dif_digest 00:37:03.642 ************************************ 00:37:03.642 13:55:14 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:03.642 13:55:14 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:03.642 13:55:14 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:03.642 13:55:14 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:37:03.642 13:55:14 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:03.642 13:55:14 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:37:03.642 13:55:14 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:03.642 13:55:14 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:03.642 rmmod nvme_tcp 00:37:03.642 rmmod nvme_fabrics 00:37:03.642 rmmod nvme_keyring 00:37:03.642 13:55:15 nvmf_dif -- nvmf/common.sh@123 -- # 
modprobe -v -r nvme-fabrics 00:37:03.642 13:55:15 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:37:03.642 13:55:15 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:37:03.642 13:55:15 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 117052 ']' 00:37:03.642 13:55:15 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 117052 00:37:03.642 13:55:15 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 117052 ']' 00:37:03.642 13:55:15 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 117052 00:37:03.642 13:55:15 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:37:03.642 13:55:15 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:03.642 13:55:15 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 117052 00:37:03.642 13:55:15 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:03.642 13:55:15 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:03.642 13:55:15 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 117052' 00:37:03.642 killing process with pid 117052 00:37:03.642 13:55:15 nvmf_dif -- common/autotest_common.sh@965 -- # kill 117052 00:37:03.642 [2024-05-15 13:55:15.073282] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:37:03.642 13:55:15 nvmf_dif -- common/autotest_common.sh@970 -- # wait 117052 00:37:03.642 13:55:15 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:03.642 13:55:15 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:03.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:03.642 Waiting for block devices as requested 00:37:03.642 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:37:03.642 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:03.642 13:55:15 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:03.642 13:55:15 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:03.642 13:55:15 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:03.642 13:55:15 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:03.642 13:55:15 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.642 13:55:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:03.642 13:55:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.642 13:55:15 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:37:03.642 00:37:03.642 real 1m0.060s 00:37:03.642 user 3m52.030s 00:37:03.642 sys 0m14.417s 00:37:03.642 13:55:15 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:03.642 13:55:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:03.642 ************************************ 00:37:03.642 END TEST nvmf_dif 00:37:03.642 ************************************ 00:37:03.642 13:55:15 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:03.642 13:55:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:03.642 13:55:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:03.642 13:55:15 -- common/autotest_common.sh@10 -- # set +x 00:37:03.642 ************************************ 00:37:03.642 START TEST nvmf_abort_qd_sizes 00:37:03.642 
************************************ 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:03.642 * Looking for test storage... 00:37:03.642 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:37:03.642 13:55:15 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:37:03.642 13:55:15 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:37:03.642 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:37:03.642 Cannot find device "nvmf_tgt_br" 00:37:03.642 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:37:03.642 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:37:03.642 Cannot find device "nvmf_tgt_br2" 00:37:03.642 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:37:03.643 Cannot find device "nvmf_tgt_br" 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:37:03.643 Cannot find device "nvmf_tgt_br2" 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:37:03.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:37:03.643 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:37:03.643 13:55:16 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:37:03.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:03.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:37:03.643 00:37:03.643 --- 10.0.0.2 ping statistics --- 00:37:03.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:03.643 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:37:03.643 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:37:03.643 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:37:03.643 00:37:03.643 --- 10.0.0.3 ping statistics --- 00:37:03.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:03.643 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:37:03.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:03.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:37:03.643 00:37:03.643 --- 10.0.0.1 ping statistics --- 00:37:03.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:03.643 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:37:03.643 13:55:16 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:37:03.901 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:04.158 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:37:04.158 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:37:04.158 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:04.158 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:37:04.158 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:37:04.158 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:04.158 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:37:04.158 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:37:04.158 13:55:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:04.158 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:37:04.158 13:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:04.158 13:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:04.159 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=118378 00:37:04.159 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:04.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:04.159 13:55:17 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 118378 00:37:04.159 13:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 118378 ']' 00:37:04.159 13:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:04.159 13:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:04.159 13:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:04.159 13:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:04.159 13:55:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:04.159 [2024-05-15 13:55:17.239465] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:37:04.159 [2024-05-15 13:55:17.239555] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:04.415 [2024-05-15 13:55:17.359545] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
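
The nvmf_veth_init trace above is the whole network fixture for this test: one veth pair per interface, the target ends moved into the nvmf_tgt_ns_spdk namespace, the host-side ends enslaved to the nvmf_br bridge, an iptables accept rule for TCP/4420, and a ping of each address before nvmf_tgt is launched inside the namespace. A condensed, hand-runnable sketch of the same steps, assuming the paths of this run (the second target interface and bridge are omitted for brevity):

  # rebuild the veth/bridge/netns layout used by nvmf_veth_init above
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2        # host -> target namespace, as checked above

  # start the target inside the namespace, as nvmfappstart does above
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
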
00:37:04.415 [2024-05-15 13:55:17.378718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:04.415 [2024-05-15 13:55:17.481634] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:04.415 [2024-05-15 13:55:17.481917] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:04.415 [2024-05-15 13:55:17.482074] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:04.415 [2024-05-15 13:55:17.482255] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:04.415 [2024-05-15 13:55:17.482450] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:04.415 [2024-05-15 13:55:17.482591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:04.415 [2024-05-15 13:55:17.482715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:04.415 [2024-05-15 13:55:17.484032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:37:04.415 [2024-05-15 13:55:17.484041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:05.349 13:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:05.349 13:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:37:05.349 13:55:18 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:37:05.349 13:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:05.349 13:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:05.349 13:55:18 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:05.349 13:55:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:05.349 13:55:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:05.349 13:55:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:05.349 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:37:05.349 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:37:05.349 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:37:05.349 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:37:05.350 13:55:18 
nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:05.350 13:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:05.350 
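
nvme_in_userspace, traced above, walks lspci output for PCI class 01 / subclass 08 / prog-if 02 (NVMe) and keeps only the controllers still bound to the kernel nvme driver, which is how the test ends up with 0000:00:10.0 and 0000:00:11.0. A rough standalone equivalent of that discovery step (the bdfs array name is just illustrative):

  # NVMe controllers = PCI class 0108, prog-if 02, still bound to the kernel nvme driver
  bdfs=()
  while read -r bdf; do
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && bdfs+=("$bdf")
  done < <(lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"')
  printf '%s\n' "${bdfs[@]}"      # prints 0000:00:10.0 and 0000:00:11.0 on this VM
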
13:55:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:05.350 ************************************ 00:37:05.350 START TEST spdk_target_abort 00:37:05.350 ************************************ 00:37:05.350 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:37:05.350 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:05.350 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:37:05.350 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.350 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:05.350 spdk_targetn1 00:37:05.350 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.350 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:05.350 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.350 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:05.350 [2024-05-15 13:55:18.425994] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:05.350 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.350 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:05.350 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.350 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:05.609 [2024-05-15 13:55:18.457924] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:05.609 [2024-05-15 13:55:18.458381] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 
4420 nqn.2016-06.io.spdk:testnqn 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:05.609 13:55:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:08.891 Initializing NVMe Controllers 00:37:08.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:08.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:08.891 Initialization complete. Launching workers. 
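
The rabort helper traced above does only two things: fold the transport fields into the -r argument of SPDK's bundled abort example, then repeat the run at queue depths 4, 24 and 64 so the controller has progressively more outstanding commands to abort. A minimal sketch of that loop, assuming the same build tree and the listener created above:

  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in 4 24 64; do
      # -w rw -M 50: 50/50 read/write mix, 4 KiB I/O, aborts issued at the given queue depth
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done
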
00:37:08.891 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11101, failed: 0 00:37:08.891 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1045, failed to submit 10056 00:37:08.891 success 811, unsuccess 234, failed 0 00:37:08.891 13:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:08.891 13:55:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:12.238 Initializing NVMe Controllers 00:37:12.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:12.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:12.238 Initialization complete. Launching workers. 00:37:12.238 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5913, failed: 0 00:37:12.238 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1242, failed to submit 4671 00:37:12.238 success 264, unsuccess 978, failed 0 00:37:12.238 13:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:12.238 13:55:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:15.518 Initializing NVMe Controllers 00:37:15.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:15.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:15.518 Initialization complete. Launching workers. 
00:37:15.518 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29908, failed: 0 00:37:15.518 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2648, failed to submit 27260 00:37:15.518 success 406, unsuccess 2242, failed 0 00:37:15.518 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:15.518 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.518 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:15.518 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.518 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:15.518 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:15.518 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:15.777 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:15.777 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 118378 00:37:15.777 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 118378 ']' 00:37:15.777 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 118378 00:37:15.777 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:37:15.777 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:15.777 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 118378 00:37:15.777 killing process with pid 118378 00:37:15.777 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:15.777 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:15.777 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 118378' 00:37:15.777 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 118378 00:37:15.777 [2024-05-15 13:55:28.742763] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:37:15.777 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 118378 00:37:16.035 ************************************ 00:37:16.035 END TEST spdk_target_abort 00:37:16.035 ************************************ 00:37:16.035 00:37:16.035 real 0m10.618s 00:37:16.035 user 0m43.572s 00:37:16.035 sys 0m1.709s 00:37:16.035 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:16.035 13:55:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:16.035 13:55:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:16.035 13:55:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:16.035 13:55:28 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:37:16.035 13:55:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:16.035 ************************************ 00:37:16.035 START TEST kernel_target_abort 00:37:16.035 ************************************ 00:37:16.035 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:37:16.035 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:16.035 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:37:16.035 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:16.036 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:16.294 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:16.294 Waiting for block devices as requested 00:37:16.572 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:37:16.572 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:37:16.572 No valid GPT data, bailing 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:37:16.572 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:37:16.830 No valid GPT data, bailing 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:37:16.830 No valid GPT data, bailing 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:37:16.830 No valid GPT data, bailing 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd --hostid=1922f591-978b-44b0-bc45-c969115d53dd -a 10.0.0.1 -t tcp -s 4420 00:37:16.830 00:37:16.830 Discovery Log Number of Records 2, Generation counter 2 00:37:16.830 =====Discovery Log Entry 0====== 00:37:16.830 trtype: tcp 00:37:16.830 adrfam: ipv4 00:37:16.830 subtype: current discovery subsystem 00:37:16.830 treq: not specified, sq flow control disable supported 00:37:16.830 portid: 1 00:37:16.830 trsvcid: 4420 00:37:16.830 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:16.830 traddr: 10.0.0.1 00:37:16.830 eflags: none 00:37:16.830 sectype: none 00:37:16.830 =====Discovery Log Entry 1====== 00:37:16.830 trtype: tcp 00:37:16.830 adrfam: ipv4 00:37:16.830 subtype: nvme subsystem 00:37:16.830 treq: not specified, sq flow control disable supported 00:37:16.830 portid: 1 00:37:16.830 trsvcid: 4420 00:37:16.830 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:16.830 traddr: 10.0.0.1 00:37:16.830 eflags: none 00:37:16.830 sectype: none 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:16.830 13:55:29 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.830 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:16.831 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.831 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:16.831 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:16.831 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:16.831 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:16.831 13:55:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:20.111 Initializing NVMe Controllers 00:37:20.111 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:20.111 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:20.111 Initialization complete. Launching workers. 00:37:20.111 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37241, failed: 0 00:37:20.111 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37241, failed to submit 0 00:37:20.111 success 0, unsuccess 37241, failed 0 00:37:20.111 13:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:20.111 13:55:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:23.393 Initializing NVMe Controllers 00:37:23.393 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:23.393 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:23.393 Initialization complete. Launching workers. 
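
configure_kernel_target, traced above, drives the in-kernel nvmet target entirely through configfs. The xtrace hides the redirection targets of the echo commands, so the attribute file names below are the standard nvmet configfs ones rather than values read off this log; /dev/nvme1n1 is the device the GPT probe above settled on, and the SPDK-… identity string is left out:

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

  echo 1             > "$subsys/attr_allow_any_host"
  echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"    # unused disk picked by the GPT probe above
  echo 1             > "$subsys/namespaces/1/enable"

  echo tcp       > "$nvmet/ports/1/addr_trtype"
  echo ipv4      > "$nvmet/ports/1/addr_adrfam"
  echo 10.0.0.1  > "$nvmet/ports/1/addr_traddr"
  echo 4420      > "$nvmet/ports/1/addr_trsvcid"

  ln -s "$subsys" "$nvmet/ports/1/subsystems/"
  nvme discover -t tcp -a 10.0.0.1 -s 4420     # should return the two discovery log entries shown above
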
00:37:23.393 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72563, failed: 0 00:37:23.393 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31667, failed to submit 40896 00:37:23.393 success 0, unsuccess 31667, failed 0 00:37:23.393 13:55:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:23.393 13:55:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:26.673 Initializing NVMe Controllers 00:37:26.673 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:26.673 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:26.673 Initialization complete. Launching workers. 00:37:26.673 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80821, failed: 0 00:37:26.673 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20192, failed to submit 60629 00:37:26.673 success 0, unsuccess 20192, failed 0 00:37:26.673 13:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:26.673 13:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:26.673 13:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:37:26.673 13:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:26.673 13:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:26.673 13:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:26.673 13:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:26.673 13:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:37:26.673 13:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:37:26.673 13:55:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:37:27.238 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:28.611 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:37:28.611 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:37:28.611 00:37:28.611 real 0m12.577s 00:37:28.611 user 0m6.300s 00:37:28.611 sys 0m3.631s 00:37:28.611 ************************************ 00:37:28.611 END TEST kernel_target_abort 00:37:28.611 ************************************ 00:37:28.611 13:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:28.611 13:55:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:28.611 13:55:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:28.611 13:55:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:28.611 
13:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:37:28.611 13:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:37:28.611 13:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:37:28.611 13:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:37:28.611 13:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:37:28.611 13:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:37:28.611 rmmod nvme_tcp 00:37:28.611 rmmod nvme_fabrics 00:37:28.611 rmmod nvme_keyring 00:37:28.611 13:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:37:28.611 13:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:37:28.611 13:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:37:28.611 13:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 118378 ']' 00:37:28.611 13:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 118378 00:37:28.870 Process with pid 118378 is not found 00:37:28.870 13:55:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 118378 ']' 00:37:28.870 13:55:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 118378 00:37:28.870 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (118378) - No such process 00:37:28.870 13:55:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 118378 is not found' 00:37:28.870 13:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:37:28.870 13:55:41 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:37:29.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:29.127 Waiting for block devices as requested 00:37:29.127 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:37:29.127 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:29.386 13:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:37:29.386 13:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:37:29.386 13:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:37:29.386 13:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:37:29.386 13:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:29.386 13:55:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:29.386 13:55:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:29.386 13:55:42 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:37:29.386 00:37:29.386 real 0m26.410s 00:37:29.386 user 0m51.053s 00:37:29.386 sys 0m6.651s 00:37:29.386 13:55:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:29.386 13:55:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:29.386 ************************************ 00:37:29.386 END TEST nvmf_abort_qd_sizes 00:37:29.386 ************************************ 00:37:29.386 13:55:42 -- spdk/autotest.sh@291 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:37:29.386 13:55:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:29.386 13:55:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:29.386 13:55:42 -- common/autotest_common.sh@10 -- # set +x 
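A rough sketch of the teardown traced above: clean_kernel_target from nvmf/common.sh unwinds the kernel nvmet configfs tree, then nvmftestfini unloads the initiator modules and hands the devices back via setup.sh. Commands are taken from the xtrace; the target of the 'echo 0' is not visible in the log and is assumed here to be the namespace enable attribute.

    nqn=nqn.2016-06.io.spdk:testnqn
    cfg=/sys/kernel/config/nvmet

    if [[ -e $cfg/subsystems/$nqn ]]; then
        echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"   # assumption: disable the namespace first
        rm -f "$cfg/ports/1/subsystems/$nqn"                  # unlink the subsystem from port 1
        rmdir "$cfg/subsystems/$nqn/namespaces/1"
        rmdir "$cfg/ports/1"
        rmdir "$cfg/subsystems/$nqn"
    fi
    modprobe -r nvmet_tcp nvmet                               # kernel target side
    modprobe -v -r nvme-tcp                                   # initiator side; the rmmod lines above show
    modprobe -v -r nvme-fabrics                               # nvme_tcp, nvme_fabrics and nvme_keyring going away
    /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset       # rebind the NVMe PCI devices to the kernel driver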
00:37:29.386 ************************************ 00:37:29.386 START TEST keyring_file 00:37:29.386 ************************************ 00:37:29.386 13:55:42 keyring_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:37:29.386 * Looking for test storage... 00:37:29.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:37:29.386 13:55:42 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:37:29.386 13:55:42 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1922f591-978b-44b0-bc45-c969115d53dd 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=1922f591-978b-44b0-bc45-c969115d53dd 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:29.386 13:55:42 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.386 13:55:42 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.386 13:55:42 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.386 13:55:42 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.386 13:55:42 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.386 13:55:42 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.386 13:55:42 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:29.386 13:55:42 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@47 -- # : 0 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:29.386 13:55:42 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:29.387 13:55:42 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:29.387 13:55:42 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:29.387 13:55:42 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:29.387 13:55:42 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:29.387 13:55:42 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:29.387 13:55:42 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:29.387 13:55:42 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:29.387 13:55:42 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:29.387 13:55:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:29.387 13:55:42 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:29.387 13:55:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:29.387 13:55:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:29.387 13:55:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:29.387 13:55:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bCr92Cd6gp 00:37:29.387 13:55:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:29.387 13:55:42 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:29.387 13:55:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:29.387 13:55:42 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:29.387 13:55:42 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:29.387 13:55:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:29.387 13:55:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:29.645 13:55:42 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bCr92Cd6gp 00:37:29.645 13:55:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bCr92Cd6gp 00:37:29.645 13:55:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.bCr92Cd6gp 00:37:29.645 13:55:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:29.645 13:55:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:29.645 13:55:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:29.645 13:55:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:29.645 13:55:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:29.645 13:55:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:29.645 13:55:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.xDI430RIZE 00:37:29.645 13:55:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:29.645 13:55:42 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:29.645 13:55:42 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:29.645 13:55:42 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:29.645 13:55:42 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:29.645 13:55:42 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:29.645 13:55:42 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:29.645 13:55:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xDI430RIZE 00:37:29.645 13:55:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.xDI430RIZE 00:37:29.645 13:55:42 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.xDI430RIZE 00:37:29.645 13:55:42 keyring_file -- keyring/file.sh@30 -- # tgtpid=119253 00:37:29.646 13:55:42 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:29.646 13:55:42 keyring_file -- keyring/file.sh@32 -- # waitforlisten 119253 00:37:29.646 13:55:42 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 119253 ']' 00:37:29.646 13:55:42 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:29.646 13:55:42 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:29.646 13:55:42 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:29.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:29.646 13:55:42 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:29.646 13:55:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:29.646 [2024-05-15 13:55:42.645289] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:37:29.646 [2024-05-15 13:55:42.646359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119253 ] 00:37:29.904 [2024-05-15 13:55:42.769232] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
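The prep_key calls traced above boil down to: make a temp file, convert the raw hex key into the NVMeTLSkey-1 interchange form (the trace shows format_interchange_psk doing this through a short python helper), and lock the file down to 0600. A minimal sketch, assuming test/nvmf/common.sh provides format_interchange_psk once sourced:

    source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh   # provides format_interchange_psk

    key=00112233445566778899aabbccddeeff
    path=$(mktemp)                              # e.g. /tmp/tmp.bCr92Cd6gp in the run above
    format_interchange_psk "$key" 0 > "$path"   # digest 0, emits the NVMeTLSkey-1 wrapped key
    chmod 0600 "$path"                          # keep the key file private

The 0600 step matters: a later negative test in this run deliberately chmods the file to 0660 and keyring_file_add_key refuses it with 'Invalid permissions for key file'.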
00:37:29.904 [2024-05-15 13:55:42.789639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:29.904 [2024-05-15 13:55:42.895742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:30.848 13:55:43 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:30.848 13:55:43 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:30.848 13:55:43 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:30.848 13:55:43 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.848 13:55:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:30.849 [2024-05-15 13:55:43.661696] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:30.849 null0 00:37:30.849 [2024-05-15 13:55:43.693629] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:37:30.849 [2024-05-15 13:55:43.693729] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:30.849 [2024-05-15 13:55:43.693951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:30.849 [2024-05-15 13:55:43.701678] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:30.849 13:55:43 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:30.849 [2024-05-15 13:55:43.717697] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:30.849 2024/05/15 13:55:43 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:37:30.849 request: 00:37:30.849 { 00:37:30.849 "method": "nvmf_subsystem_add_listener", 00:37:30.849 "params": { 00:37:30.849 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:30.849 "secure_channel": false, 00:37:30.849 "listen_address": { 00:37:30.849 "trtype": "tcp", 00:37:30.849 "traddr": "127.0.0.1", 00:37:30.849 "trsvcid": "4420" 00:37:30.849 } 00:37:30.849 } 00:37:30.849 } 00:37:30.849 Got JSON-RPC error response 00:37:30.849 GoRPCClient: error on JSON-RPC call 00:37:30.849 13:55:43 
keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:30.849 13:55:43 keyring_file -- keyring/file.sh@46 -- # bperfpid=119284 00:37:30.849 13:55:43 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:30.849 13:55:43 keyring_file -- keyring/file.sh@48 -- # waitforlisten 119284 /var/tmp/bperf.sock 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 119284 ']' 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:30.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:30.849 13:55:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:30.849 [2024-05-15 13:55:43.791849] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:37:30.849 [2024-05-15 13:55:43.791982] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119284 ] 00:37:30.849 [2024-05-15 13:55:43.918654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
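Two helpers recur throughout the keyring trace and are worth spelling out. bperf_cmd is just rpc.py pointed at the private socket of the bdevperf instance started with '-r /var/tmp/bperf.sock -z', and NOT wraps a command that is expected to fail, such as the nvmf_subsystem_add_listener call above that errors with 'Listener already exists'. The bodies below are a simplified, hedged reconstruction (the trace shows the real helper also distinguishing exit codes above 128), not the literal SPDK source.

    bperf_cmd() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }

    NOT() {
        if "$@"; then
            return 1    # the command unexpectedly succeeded
        fi
        return 0        # failure was expected (e.g. 'Listener already exists')
    }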
00:37:30.849 [2024-05-15 13:55:43.938637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:31.108 [2024-05-15 13:55:44.037613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:32.042 13:55:44 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:32.042 13:55:44 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:32.042 13:55:44 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bCr92Cd6gp 00:37:32.042 13:55:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bCr92Cd6gp 00:37:32.042 13:55:45 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xDI430RIZE 00:37:32.042 13:55:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xDI430RIZE 00:37:32.300 13:55:45 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:37:32.300 13:55:45 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:37:32.300 13:55:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:32.300 13:55:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.300 13:55:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:32.557 13:55:45 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.bCr92Cd6gp == \/\t\m\p\/\t\m\p\.\b\C\r\9\2\C\d\6\g\p ]] 00:37:32.557 13:55:45 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:37:32.557 13:55:45 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:32.557 13:55:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:32.557 13:55:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.557 13:55:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:32.816 13:55:45 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.xDI430RIZE == \/\t\m\p\/\t\m\p\.\x\D\I\4\3\0\R\I\Z\E ]] 00:37:32.816 13:55:45 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:37:32.816 13:55:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:32.816 13:55:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:32.816 13:55:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:32.816 13:55:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:32.816 13:55:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:33.382 13:55:46 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:37:33.382 13:55:46 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:37:33.382 13:55:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.382 13:55:46 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:33.382 13:55:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.382 13:55:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:33.382 13:55:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:33.639 13:55:46 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:33.639 13:55:46 keyring_file -- keyring/file.sh@57 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:33.639 13:55:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:33.895 [2024-05-15 13:55:46.763795] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:33.895 nvme0n1 00:37:33.895 13:55:46 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:33.895 13:55:46 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:33.895 13:55:46 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:33.895 13:55:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:33.895 13:55:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:33.895 13:55:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.460 13:55:47 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:34.460 13:55:47 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:34.460 13:55:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:34.460 13:55:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:34.460 13:55:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:34.460 13:55:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:34.460 13:55:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:34.460 13:55:47 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:37:34.460 13:55:47 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:34.716 Running I/O for 1 seconds... 
00:37:35.649 00:37:35.649 Latency(us) 00:37:35.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:35.649 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:35.649 nvme0n1 : 1.01 10791.32 42.15 0.00 0.00 11822.49 5719.51 18826.71 00:37:35.649 =================================================================================================================== 00:37:35.649 Total : 10791.32 42.15 0.00 0.00 11822.49 5719.51 18826.71 00:37:35.649 0 00:37:35.649 13:55:48 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:35.649 13:55:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:35.908 13:55:48 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:37:35.908 13:55:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:35.908 13:55:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:35.908 13:55:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:35.908 13:55:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:35.908 13:55:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:36.474 13:55:49 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:37:36.474 13:55:49 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:37:36.474 13:55:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:36.474 13:55:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:36.474 13:55:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:36.474 13:55:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:36.474 13:55:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:36.732 13:55:49 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:36.732 13:55:49 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:36.732 13:55:49 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:36.732 13:55:49 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:36.732 13:55:49 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:36.732 13:55:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:36.732 13:55:49 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:36.732 13:55:49 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:36.732 13:55:49 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:36.732 13:55:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:36.991 [2024-05-15 13:55:49.948837] 
/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:36.991 [2024-05-15 13:55:49.949514] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2439e90 (107): Transport endpoint is not connected 00:37:36.991 [2024-05-15 13:55:49.950502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2439e90 (9): Bad file descriptor 00:37:36.991 [2024-05-15 13:55:49.951498] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:36.991 [2024-05-15 13:55:49.951519] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:36.991 [2024-05-15 13:55:49.951530] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:36.991 2024/05/15 13:55:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:37:36.991 request: 00:37:36.991 { 00:37:36.991 "method": "bdev_nvme_attach_controller", 00:37:36.991 "params": { 00:37:36.991 "name": "nvme0", 00:37:36.991 "trtype": "tcp", 00:37:36.991 "traddr": "127.0.0.1", 00:37:36.991 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:36.991 "adrfam": "ipv4", 00:37:36.991 "trsvcid": "4420", 00:37:36.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:36.991 "psk": "key1" 00:37:36.991 } 00:37:36.991 } 00:37:36.991 Got JSON-RPC error response 00:37:36.991 GoRPCClient: error on JSON-RPC call 00:37:36.991 13:55:49 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:36.991 13:55:49 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:36.991 13:55:49 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:36.991 13:55:49 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:36.991 13:55:49 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:36.991 13:55:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:36.991 13:55:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:36.991 13:55:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:36.991 13:55:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:36.991 13:55:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.249 13:55:50 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:37.249 13:55:50 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:37.249 13:55:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:37.249 13:55:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:37.249 13:55:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:37.249 13:55:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:37.249 13:55:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:37.507 13:55:50 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:37.507 13:55:50 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:37.507 13:55:50 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:37.765 13:55:50 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:37.765 13:55:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:38.023 13:55:51 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:38.023 13:55:51 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:38.023 13:55:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:38.282 13:55:51 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:38.282 13:55:51 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.bCr92Cd6gp 00:37:38.282 13:55:51 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.bCr92Cd6gp 00:37:38.282 13:55:51 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:38.282 13:55:51 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.bCr92Cd6gp 00:37:38.282 13:55:51 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:38.282 13:55:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:38.282 13:55:51 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:38.282 13:55:51 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:38.282 13:55:51 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bCr92Cd6gp 00:37:38.282 13:55:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bCr92Cd6gp 00:37:38.540 [2024-05-15 13:55:51.572919] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.bCr92Cd6gp': 0100660 00:37:38.540 [2024-05-15 13:55:51.572969] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:38.540 2024/05/15 13:55:51 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.bCr92Cd6gp], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:37:38.540 request: 00:37:38.540 { 00:37:38.540 "method": "keyring_file_add_key", 00:37:38.540 "params": { 00:37:38.540 "name": "key0", 00:37:38.540 "path": "/tmp/tmp.bCr92Cd6gp" 00:37:38.540 } 00:37:38.540 } 00:37:38.540 Got JSON-RPC error response 00:37:38.540 GoRPCClient: error on JSON-RPC call 00:37:38.540 13:55:51 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:38.540 13:55:51 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:38.540 13:55:51 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:38.540 13:55:51 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:38.540 13:55:51 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.bCr92Cd6gp 00:37:38.540 13:55:51 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bCr92Cd6gp 00:37:38.540 13:55:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bCr92Cd6gp 00:37:38.798 13:55:51 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.bCr92Cd6gp 00:37:38.798 13:55:51 
keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:38.798 13:55:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:38.798 13:55:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:38.798 13:55:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:38.798 13:55:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:38.798 13:55:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:39.366 13:55:52 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:39.366 13:55:52 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:39.366 13:55:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:39.366 13:55:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:39.366 13:55:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:39.366 13:55:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.366 13:55:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:39.366 13:55:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.366 13:55:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:39.366 13:55:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:39.366 [2024-05-15 13:55:52.425110] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.bCr92Cd6gp': No such file or directory 00:37:39.366 [2024-05-15 13:55:52.425166] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:39.366 [2024-05-15 13:55:52.425193] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:39.366 [2024-05-15 13:55:52.425202] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:39.366 [2024-05-15 13:55:52.425211] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:39.366 2024/05/15 13:55:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:37:39.366 request: 00:37:39.366 { 00:37:39.366 "method": "bdev_nvme_attach_controller", 00:37:39.366 "params": { 00:37:39.366 "name": "nvme0", 00:37:39.366 "trtype": "tcp", 00:37:39.366 "traddr": "127.0.0.1", 00:37:39.366 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:39.366 "adrfam": "ipv4", 00:37:39.366 "trsvcid": "4420", 00:37:39.366 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:39.366 "psk": "key0" 00:37:39.366 } 00:37:39.366 } 
00:37:39.366 Got JSON-RPC error response 00:37:39.366 GoRPCClient: error on JSON-RPC call 00:37:39.366 13:55:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:39.366 13:55:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:39.366 13:55:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:39.366 13:55:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:39.366 13:55:52 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:39.366 13:55:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:39.933 13:55:52 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:39.933 13:55:52 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:39.933 13:55:52 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:39.933 13:55:52 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:39.933 13:55:52 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:39.933 13:55:52 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:39.933 13:55:52 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.f53RCRX2ey 00:37:39.933 13:55:52 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:39.933 13:55:52 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:39.933 13:55:52 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:39.933 13:55:52 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:39.933 13:55:52 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:39.933 13:55:52 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:39.933 13:55:52 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:39.933 13:55:52 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.f53RCRX2ey 00:37:39.933 13:55:52 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.f53RCRX2ey 00:37:39.933 13:55:52 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.f53RCRX2ey 00:37:39.933 13:55:52 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.f53RCRX2ey 00:37:39.933 13:55:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.f53RCRX2ey 00:37:40.191 13:55:53 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:40.191 13:55:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:40.449 nvme0n1 00:37:40.449 13:55:53 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:40.449 13:55:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:40.449 13:55:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:40.449 13:55:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:40.449 13:55:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:40.449 13:55:53 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:40.706 13:55:53 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:40.706 13:55:53 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:40.706 13:55:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:40.963 13:55:53 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:40.963 13:55:53 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:40.963 13:55:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:40.963 13:55:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:40.963 13:55:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:41.220 13:55:54 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:41.221 13:55:54 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:41.221 13:55:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:41.221 13:55:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:41.221 13:55:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:41.221 13:55:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:41.221 13:55:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:41.478 13:55:54 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:41.479 13:55:54 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:41.479 13:55:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:42.083 13:55:54 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:42.083 13:55:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:42.083 13:55:54 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:42.083 13:55:55 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:42.083 13:55:55 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.f53RCRX2ey 00:37:42.083 13:55:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.f53RCRX2ey 00:37:42.341 13:55:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.xDI430RIZE 00:37:42.341 13:55:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.xDI430RIZE 00:37:42.599 13:55:55 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:42.599 13:55:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:42.858 nvme0n1 00:37:42.858 13:55:55 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:42.858 13:55:55 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:43.425 13:55:56 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:43.425 "subsystems": [ 00:37:43.425 { 00:37:43.425 "subsystem": "keyring", 00:37:43.425 "config": [ 00:37:43.425 { 00:37:43.425 "method": "keyring_file_add_key", 00:37:43.425 "params": { 00:37:43.425 "name": "key0", 00:37:43.425 "path": "/tmp/tmp.f53RCRX2ey" 00:37:43.425 } 00:37:43.425 }, 00:37:43.425 { 00:37:43.425 "method": "keyring_file_add_key", 00:37:43.425 "params": { 00:37:43.425 "name": "key1", 00:37:43.425 "path": "/tmp/tmp.xDI430RIZE" 00:37:43.425 } 00:37:43.425 } 00:37:43.425 ] 00:37:43.425 }, 00:37:43.425 { 00:37:43.425 "subsystem": "iobuf", 00:37:43.425 "config": [ 00:37:43.425 { 00:37:43.425 "method": "iobuf_set_options", 00:37:43.425 "params": { 00:37:43.425 "large_bufsize": 135168, 00:37:43.425 "large_pool_count": 1024, 00:37:43.425 "small_bufsize": 8192, 00:37:43.425 "small_pool_count": 8192 00:37:43.425 } 00:37:43.425 } 00:37:43.425 ] 00:37:43.425 }, 00:37:43.425 { 00:37:43.425 "subsystem": "sock", 00:37:43.425 "config": [ 00:37:43.425 { 00:37:43.425 "method": "sock_impl_set_options", 00:37:43.425 "params": { 00:37:43.425 "enable_ktls": false, 00:37:43.425 "enable_placement_id": 0, 00:37:43.425 "enable_quickack": false, 00:37:43.425 "enable_recv_pipe": true, 00:37:43.426 "enable_zerocopy_send_client": false, 00:37:43.426 "enable_zerocopy_send_server": true, 00:37:43.426 "impl_name": "posix", 00:37:43.426 "recv_buf_size": 2097152, 00:37:43.426 "send_buf_size": 2097152, 00:37:43.426 "tls_version": 0, 00:37:43.426 "zerocopy_threshold": 0 00:37:43.426 } 00:37:43.426 }, 00:37:43.426 { 00:37:43.426 "method": "sock_impl_set_options", 00:37:43.426 "params": { 00:37:43.426 "enable_ktls": false, 00:37:43.426 "enable_placement_id": 0, 00:37:43.426 "enable_quickack": false, 00:37:43.426 "enable_recv_pipe": true, 00:37:43.426 "enable_zerocopy_send_client": false, 00:37:43.426 "enable_zerocopy_send_server": true, 00:37:43.426 "impl_name": "ssl", 00:37:43.426 "recv_buf_size": 4096, 00:37:43.426 "send_buf_size": 4096, 00:37:43.426 "tls_version": 0, 00:37:43.426 "zerocopy_threshold": 0 00:37:43.426 } 00:37:43.426 } 00:37:43.426 ] 00:37:43.426 }, 00:37:43.426 { 00:37:43.426 "subsystem": "vmd", 00:37:43.426 "config": [] 00:37:43.426 }, 00:37:43.426 { 00:37:43.426 "subsystem": "accel", 00:37:43.426 "config": [ 00:37:43.426 { 00:37:43.426 "method": "accel_set_options", 00:37:43.426 "params": { 00:37:43.426 "buf_count": 2048, 00:37:43.426 "large_cache_size": 16, 00:37:43.426 "sequence_count": 2048, 00:37:43.426 "small_cache_size": 128, 00:37:43.426 "task_count": 2048 00:37:43.426 } 00:37:43.426 } 00:37:43.426 ] 00:37:43.426 }, 00:37:43.426 { 00:37:43.426 "subsystem": "bdev", 00:37:43.426 "config": [ 00:37:43.426 { 00:37:43.426 "method": "bdev_set_options", 00:37:43.426 "params": { 00:37:43.426 "bdev_auto_examine": true, 00:37:43.426 "bdev_io_cache_size": 256, 00:37:43.426 "bdev_io_pool_size": 65535, 00:37:43.426 "iobuf_large_cache_size": 16, 00:37:43.426 "iobuf_small_cache_size": 128 00:37:43.426 } 00:37:43.426 }, 00:37:43.426 { 00:37:43.426 "method": "bdev_raid_set_options", 00:37:43.426 "params": { 00:37:43.426 "process_window_size_kb": 1024 00:37:43.426 } 00:37:43.426 }, 00:37:43.426 { 00:37:43.426 "method": "bdev_iscsi_set_options", 00:37:43.426 "params": { 00:37:43.426 "timeout_sec": 30 00:37:43.426 } 00:37:43.426 }, 00:37:43.426 { 00:37:43.426 "method": "bdev_nvme_set_options", 00:37:43.426 "params": { 00:37:43.426 
"action_on_timeout": "none", 00:37:43.426 "allow_accel_sequence": false, 00:37:43.426 "arbitration_burst": 0, 00:37:43.426 "bdev_retry_count": 3, 00:37:43.426 "ctrlr_loss_timeout_sec": 0, 00:37:43.426 "delay_cmd_submit": true, 00:37:43.426 "dhchap_dhgroups": [ 00:37:43.426 "null", 00:37:43.426 "ffdhe2048", 00:37:43.426 "ffdhe3072", 00:37:43.426 "ffdhe4096", 00:37:43.426 "ffdhe6144", 00:37:43.426 "ffdhe8192" 00:37:43.426 ], 00:37:43.426 "dhchap_digests": [ 00:37:43.426 "sha256", 00:37:43.426 "sha384", 00:37:43.426 "sha512" 00:37:43.426 ], 00:37:43.426 "disable_auto_failback": false, 00:37:43.426 "fast_io_fail_timeout_sec": 0, 00:37:43.426 "generate_uuids": false, 00:37:43.426 "high_priority_weight": 0, 00:37:43.426 "io_path_stat": false, 00:37:43.426 "io_queue_requests": 512, 00:37:43.426 "keep_alive_timeout_ms": 10000, 00:37:43.426 "low_priority_weight": 0, 00:37:43.426 "medium_priority_weight": 0, 00:37:43.426 "nvme_adminq_poll_period_us": 10000, 00:37:43.426 "nvme_error_stat": false, 00:37:43.426 "nvme_ioq_poll_period_us": 0, 00:37:43.426 "rdma_cm_event_timeout_ms": 0, 00:37:43.426 "rdma_max_cq_size": 0, 00:37:43.426 "rdma_srq_size": 0, 00:37:43.426 "reconnect_delay_sec": 0, 00:37:43.426 "timeout_admin_us": 0, 00:37:43.426 "timeout_us": 0, 00:37:43.426 "transport_ack_timeout": 0, 00:37:43.426 "transport_retry_count": 4, 00:37:43.426 "transport_tos": 0 00:37:43.426 } 00:37:43.426 }, 00:37:43.426 { 00:37:43.426 "method": "bdev_nvme_attach_controller", 00:37:43.426 "params": { 00:37:43.426 "adrfam": "IPv4", 00:37:43.426 "ctrlr_loss_timeout_sec": 0, 00:37:43.426 "ddgst": false, 00:37:43.426 "fast_io_fail_timeout_sec": 0, 00:37:43.426 "hdgst": false, 00:37:43.426 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:43.426 "name": "nvme0", 00:37:43.426 "prchk_guard": false, 00:37:43.426 "prchk_reftag": false, 00:37:43.426 "psk": "key0", 00:37:43.426 "reconnect_delay_sec": 0, 00:37:43.426 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:43.426 "traddr": "127.0.0.1", 00:37:43.426 "trsvcid": "4420", 00:37:43.426 "trtype": "TCP" 00:37:43.426 } 00:37:43.426 }, 00:37:43.426 { 00:37:43.426 "method": "bdev_nvme_set_hotplug", 00:37:43.426 "params": { 00:37:43.426 "enable": false, 00:37:43.426 "period_us": 100000 00:37:43.426 } 00:37:43.426 }, 00:37:43.426 { 00:37:43.426 "method": "bdev_wait_for_examine" 00:37:43.426 } 00:37:43.426 ] 00:37:43.426 }, 00:37:43.426 { 00:37:43.426 "subsystem": "nbd", 00:37:43.427 "config": [] 00:37:43.427 } 00:37:43.427 ] 00:37:43.427 }' 00:37:43.427 13:55:56 keyring_file -- keyring/file.sh@114 -- # killprocess 119284 00:37:43.427 13:55:56 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 119284 ']' 00:37:43.427 13:55:56 keyring_file -- common/autotest_common.sh@950 -- # kill -0 119284 00:37:43.427 13:55:56 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:43.427 13:55:56 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:43.427 13:55:56 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 119284 00:37:43.427 killing process with pid 119284 00:37:43.427 Received shutdown signal, test time was about 1.000000 seconds 00:37:43.427 00:37:43.427 Latency(us) 00:37:43.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:43.427 =================================================================================================================== 00:37:43.427 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:43.427 13:55:56 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 
00:37:43.427 13:55:56 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:43.427 13:55:56 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 119284' 00:37:43.427 13:55:56 keyring_file -- common/autotest_common.sh@965 -- # kill 119284 00:37:43.427 13:55:56 keyring_file -- common/autotest_common.sh@970 -- # wait 119284 00:37:43.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:43.686 13:55:56 keyring_file -- keyring/file.sh@117 -- # bperfpid=119775 00:37:43.686 13:55:56 keyring_file -- keyring/file.sh@119 -- # waitforlisten 119775 /var/tmp/bperf.sock 00:37:43.686 13:55:56 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:43.686 13:55:56 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 119775 ']' 00:37:43.686 13:55:56 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:43.686 "subsystems": [ 00:37:43.686 { 00:37:43.686 "subsystem": "keyring", 00:37:43.686 "config": [ 00:37:43.686 { 00:37:43.686 "method": "keyring_file_add_key", 00:37:43.686 "params": { 00:37:43.686 "name": "key0", 00:37:43.686 "path": "/tmp/tmp.f53RCRX2ey" 00:37:43.686 } 00:37:43.686 }, 00:37:43.686 { 00:37:43.686 "method": "keyring_file_add_key", 00:37:43.686 "params": { 00:37:43.686 "name": "key1", 00:37:43.686 "path": "/tmp/tmp.xDI430RIZE" 00:37:43.686 } 00:37:43.686 } 00:37:43.686 ] 00:37:43.686 }, 00:37:43.686 { 00:37:43.686 "subsystem": "iobuf", 00:37:43.686 "config": [ 00:37:43.686 { 00:37:43.686 "method": "iobuf_set_options", 00:37:43.686 "params": { 00:37:43.686 "large_bufsize": 135168, 00:37:43.686 "large_pool_count": 1024, 00:37:43.686 "small_bufsize": 8192, 00:37:43.686 "small_pool_count": 8192 00:37:43.686 } 00:37:43.686 } 00:37:43.686 ] 00:37:43.686 }, 00:37:43.686 { 00:37:43.686 "subsystem": "sock", 00:37:43.686 "config": [ 00:37:43.686 { 00:37:43.686 "method": "sock_impl_set_options", 00:37:43.686 "params": { 00:37:43.686 "enable_ktls": false, 00:37:43.686 "enable_placement_id": 0, 00:37:43.686 "enable_quickack": false, 00:37:43.686 "enable_recv_pipe": true, 00:37:43.686 "enable_zerocopy_send_client": false, 00:37:43.686 "enable_zerocopy_send_server": true, 00:37:43.686 "impl_name": "posix", 00:37:43.686 "recv_buf_size": 2097152, 00:37:43.686 "send_buf_size": 2097152, 00:37:43.686 "tls_version": 0, 00:37:43.686 "zerocopy_threshold": 0 00:37:43.686 } 00:37:43.686 }, 00:37:43.686 { 00:37:43.686 "method": "sock_impl_set_options", 00:37:43.686 "params": { 00:37:43.686 "enable_ktls": false, 00:37:43.686 "enable_placement_id": 0, 00:37:43.686 "enable_quickack": false, 00:37:43.686 "enable_recv_pipe": true, 00:37:43.686 "enable_zerocopy_send_client": false, 00:37:43.686 "enable_zerocopy_send_server": true, 00:37:43.686 "impl_name": "ssl", 00:37:43.686 "recv_buf_size": 4096, 00:37:43.686 "send_buf_size": 4096, 00:37:43.686 "tls_version": 0, 00:37:43.686 "zerocopy_threshold": 0 00:37:43.686 } 00:37:43.686 } 00:37:43.686 ] 00:37:43.686 }, 00:37:43.686 { 00:37:43.686 "subsystem": "vmd", 00:37:43.686 "config": [] 00:37:43.686 }, 00:37:43.686 { 00:37:43.686 "subsystem": "accel", 00:37:43.686 "config": [ 00:37:43.686 { 00:37:43.686 "method": "accel_set_options", 00:37:43.686 "params": { 00:37:43.686 "buf_count": 2048, 00:37:43.686 "large_cache_size": 16, 00:37:43.686 "sequence_count": 2048, 00:37:43.686 "small_cache_size": 128, 00:37:43.686 "task_count": 2048 00:37:43.686 } 
00:37:43.686 } 00:37:43.686 ] 00:37:43.686 }, 00:37:43.686 { 00:37:43.686 "subsystem": "bdev", 00:37:43.686 "config": [ 00:37:43.686 { 00:37:43.686 "method": "bdev_set_options", 00:37:43.686 "params": { 00:37:43.686 "bdev_auto_examine": true, 00:37:43.686 "bdev_io_cache_size": 256, 00:37:43.686 "bdev_io_pool_size": 65535, 00:37:43.686 "iobuf_large_cache_size": 16, 00:37:43.686 "iobuf_small_cache_size": 128 00:37:43.686 } 00:37:43.686 }, 00:37:43.686 { 00:37:43.686 "method": "bdev_raid_set_options", 00:37:43.686 "params": { 00:37:43.686 "process_window_size_kb": 1024 00:37:43.686 } 00:37:43.686 }, 00:37:43.686 { 00:37:43.686 "method": "bdev_iscsi_set_options", 00:37:43.686 "params": { 00:37:43.686 "timeout_sec": 30 00:37:43.686 } 00:37:43.687 }, 00:37:43.687 { 00:37:43.687 "method": "bdev_nvme_set_options", 00:37:43.687 "params": { 00:37:43.687 "action_on_timeout": "none", 00:37:43.687 "allow_accel_sequence": false, 00:37:43.687 "arbitration_burst": 0, 00:37:43.687 "bdev_retry_count": 3, 00:37:43.687 "ctrlr_loss_timeout_sec": 0, 00:37:43.687 "delay_cmd_submit": true, 00:37:43.687 "dhchap_dhgroups": [ 00:37:43.687 "null", 00:37:43.687 "ffdhe2048", 00:37:43.687 "ffdhe3072", 00:37:43.687 "ffdhe4096", 00:37:43.687 "ffdhe6144", 00:37:43.687 "ffdhe8192" 00:37:43.687 ], 00:37:43.687 "dhchap_digests": [ 00:37:43.687 "sha256", 00:37:43.687 "sha384", 00:37:43.687 "sha512" 00:37:43.687 ], 00:37:43.687 "disable_auto_failback": false, 00:37:43.687 "fast_io_fail_timeout_sec": 0, 00:37:43.687 "generate_uuids": false, 00:37:43.687 "high_priority_weight": 0, 00:37:43.687 "io_path_stat": false, 00:37:43.687 "io_queue_requests": 512, 00:37:43.687 "keep_alive_timeout_ms": 10000, 00:37:43.687 "low_priority_weight": 0, 00:37:43.687 "medium_priority_weight": 0, 00:37:43.687 "nvme_adminq_poll_period_us": 10000, 00:37:43.687 "nvme_error_stat": false, 00:37:43.687 "nvme_ioq_poll_period_us": 0, 00:37:43.687 "rdma_cm_event_timeout_ms": 0, 00:37:43.687 "rdma_max_cq_size": 0, 00:37:43.687 "rdma_srq_size": 0, 00:37:43.687 "reconnect_delay_sec": 0, 00:37:43.687 "timeout_admin_us": 0, 00:37:43.687 "timeout_us": 0, 00:37:43.687 "transport_ack_timeout": 0, 00:37:43.687 "transport_retry_count": 4, 00:37:43.687 "transport_tos": 0 00:37:43.687 } 00:37:43.687 }, 00:37:43.687 { 00:37:43.687 "method": "bdev_nvme_attach_controller", 00:37:43.687 "params": { 00:37:43.687 "adrfam": "IPv4", 00:37:43.687 "ctrlr_loss_timeout_sec": 0, 00:37:43.687 "ddgst": false, 00:37:43.687 "fast_io_fail_timeout_sec": 0, 00:37:43.687 "hdgst": false, 00:37:43.687 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:43.687 "name": "nvme0", 00:37:43.687 "prchk_guard": false, 00:37:43.687 "prchk_reftag": false, 00:37:43.687 "psk": "key0", 00:37:43.687 "reconnect_delay_sec": 0, 00:37:43.687 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:43.687 "traddr": "127.0.0.1", 00:37:43.687 "trsvcid": "4420", 00:37:43.687 "trtype": "TCP" 00:37:43.687 } 00:37:43.687 }, 00:37:43.687 { 00:37:43.687 "method": "bdev_nvme_set_hotplug", 00:37:43.687 "params": { 00:37:43.687 "enable": false, 00:37:43.687 "period_us": 100000 00:37:43.687 } 00:37:43.687 }, 00:37:43.687 { 00:37:43.687 "method": "bdev_wait_for_examine" 00:37:43.687 } 00:37:43.687 ] 00:37:43.687 }, 00:37:43.687 { 00:37:43.687 "subsystem": "nbd", 00:37:43.687 "config": [] 00:37:43.687 } 00:37:43.687 ] 00:37:43.687 }' 00:37:43.687 13:55:56 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:43.687 13:55:56 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 
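Editor's note: the long JSON blob echoed above is the configuration the test feeds into bdevperf through -c /dev/fd/63, i.e. via process substitution from keyring/file.sh. The fragment below is a trimmed, hand-written sketch of just the keyring and NVMe-attach portions of that config, to make its shape easier to see. The key paths, NQNs, PSK name and bdevperf flags are copied from the trace; the scratch file name /tmp/bperf_keyring.json is hypothetical, and the iobuf/sock/accel and bdev_*_set_options blocks that the real test spells out are omitted here (SPDK would fall back to defaults), so treat this as an outline rather than a drop-in replacement.

# Illustrative only: a trimmed version of the config echoed above, written to
# a scratch file (hypothetical name) instead of being piped via /dev/fd/63.
cat > /tmp/bperf_keyring.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        { "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.f53RCRX2ey" } },
        { "method": "keyring_file_add_key",
          "params": { "name": "key1", "path": "/tmp/tmp.xDI430RIZE" } }
      ]
    },
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "nvme0", "trtype": "TCP", "adrfam": "IPv4",
                      "traddr": "127.0.0.1", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "psk": "key0" } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON

# Same bdevperf invocation as in the trace, pointed at the scratch file.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c /tmp/bperf_keyring.json

The point of this shape is that the keyring subsystem registers the file-based keys first, so the bdev subsystem can reference key0 as the TLS PSK when attaching the TCP controller, exactly as the traced config does.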
00:37:43.687 13:55:56 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:43.687 13:55:56 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:43.687 13:55:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:43.687 [2024-05-15 13:55:56.603212] Starting SPDK v24.05-pre git sha1 253cca4fc / DPDK 24.07.0-rc0 initialization... 00:37:43.687 [2024-05-15 13:55:56.603730] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119775 ] 00:37:43.687 [2024-05-15 13:55:56.731345] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:37:43.687 [2024-05-15 13:55:56.748695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:43.945 [2024-05-15 13:55:56.850967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:43.945 [2024-05-15 13:55:57.028430] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:44.511 13:55:57 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:44.511 13:55:57 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:37:44.511 13:55:57 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:44.511 13:55:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:44.511 13:55:57 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:44.769 13:55:57 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:44.769 13:55:57 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:44.769 13:55:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:44.769 13:55:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:44.769 13:55:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:44.769 13:55:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:44.769 13:55:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:45.027 13:55:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:45.027 13:55:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:45.027 13:55:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:45.027 13:55:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:45.027 13:55:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:45.027 13:55:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:45.027 13:55:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:45.380 13:55:58 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:45.380 13:55:58 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:45.380 13:55:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:45.380 13:55:58 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:45.639 13:55:58 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 
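Editor's note: the checks above are the verification step of the keyring_file test: the number of registered keys, the per-key reference counts and the attached controller name are read back over the bperf RPC socket. Condensed from the keyring/common.sh helpers into a standalone snippet (same rpc.py calls as in the trace, with the two jq stages folded into one expression each), the queries look like this; the expected values in the comments are the ones the trace compares against.

# Re-statement of the verification queries from the trace above.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

# Both file-based keys should be registered.
$rpc keyring_get_keys | jq length                                        # expect 2

# key0 is the PSK referenced by the attached controller, key1 is only
# registered, which is what the differing reference counts reflect.
$rpc keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'  # expect 2
$rpc keyring_get_keys | jq -r '.[] | select(.name == "key1") | .refcnt'  # expect 1

# The controller from the config should be visible under its given name.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'                        # expect nvme0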
00:37:45.639 13:55:58 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:45.639 13:55:58 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.f53RCRX2ey /tmp/tmp.xDI430RIZE 00:37:45.639 13:55:58 keyring_file -- keyring/file.sh@20 -- # killprocess 119775 00:37:45.639 13:55:58 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 119775 ']' 00:37:45.639 13:55:58 keyring_file -- common/autotest_common.sh@950 -- # kill -0 119775 00:37:45.639 13:55:58 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:45.639 13:55:58 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:45.639 13:55:58 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 119775 00:37:45.639 killing process with pid 119775 00:37:45.639 Received shutdown signal, test time was about 1.000000 seconds 00:37:45.639 00:37:45.639 Latency(us) 00:37:45.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:45.639 =================================================================================================================== 00:37:45.639 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:45.639 13:55:58 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:37:45.639 13:55:58 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:37:45.639 13:55:58 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 119775' 00:37:45.639 13:55:58 keyring_file -- common/autotest_common.sh@965 -- # kill 119775 00:37:45.639 13:55:58 keyring_file -- common/autotest_common.sh@970 -- # wait 119775 00:37:45.900 13:55:58 keyring_file -- keyring/file.sh@21 -- # killprocess 119253 00:37:45.900 13:55:58 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 119253 ']' 00:37:45.900 13:55:58 keyring_file -- common/autotest_common.sh@950 -- # kill -0 119253 00:37:45.900 13:55:58 keyring_file -- common/autotest_common.sh@951 -- # uname 00:37:45.900 13:55:58 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:45.900 13:55:58 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 119253 00:37:45.900 killing process with pid 119253 00:37:45.900 13:55:58 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:45.900 13:55:58 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:45.900 13:55:58 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 119253' 00:37:45.900 13:55:58 keyring_file -- common/autotest_common.sh@965 -- # kill 119253 00:37:45.900 [2024-05-15 13:55:58.885586] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:37:45.900 [2024-05-15 13:55:58.885635] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:45.900 13:55:58 keyring_file -- common/autotest_common.sh@970 -- # wait 119253 00:37:46.469 00:37:46.469 real 0m16.916s 00:37:46.469 user 0m42.435s 00:37:46.469 sys 0m3.448s 00:37:46.469 13:55:59 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:46.469 ************************************ 00:37:46.469 END TEST keyring_file 00:37:46.469 ************************************ 00:37:46.469 13:55:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:46.469 13:55:59 -- spdk/autotest.sh@292 -- # [[ 
n == y ]] 00:37:46.469 13:55:59 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:37:46.469 13:55:59 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:46.469 13:55:59 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:46.469 13:55:59 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:37:46.469 13:55:59 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:37:46.469 13:55:59 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:37:46.469 13:55:59 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:46.469 13:55:59 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:46.469 13:55:59 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:46.470 13:55:59 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:37:46.470 13:55:59 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:46.470 13:55:59 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:37:46.470 13:55:59 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:46.470 13:55:59 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:46.470 13:55:59 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:46.470 13:55:59 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:37:46.470 13:55:59 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:37:46.470 13:55:59 -- common/autotest_common.sh@720 -- # xtrace_disable 00:37:46.470 13:55:59 -- common/autotest_common.sh@10 -- # set +x 00:37:46.470 13:55:59 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:37:46.470 13:55:59 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:37:46.470 13:55:59 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:37:46.470 13:55:59 -- common/autotest_common.sh@10 -- # set +x 00:37:47.845 INFO: APP EXITING 00:37:47.845 INFO: killing all VMs 00:37:47.845 INFO: killing vhost app 00:37:47.845 INFO: EXIT DONE 00:37:48.410 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:48.410 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:37:48.721 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:37:49.287 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:49.287 Cleaning 00:37:49.287 Removing: /var/run/dpdk/spdk0/config 00:37:49.287 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:49.287 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:49.287 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:49.287 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:49.287 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:49.287 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:49.287 Removing: /var/run/dpdk/spdk1/config 00:37:49.287 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:49.287 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:49.287 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:49.287 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:49.287 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:49.287 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:49.287 Removing: /var/run/dpdk/spdk2/config 00:37:49.287 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:49.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:49.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:49.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:49.288 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:49.288 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:49.288 Removing: /var/run/dpdk/spdk3/config 00:37:49.288 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:49.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:49.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:49.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:49.288 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:49.288 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:49.288 Removing: /var/run/dpdk/spdk4/config 00:37:49.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:49.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:49.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:49.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:49.288 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:49.288 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:49.288 Removing: /dev/shm/nvmf_trace.0 00:37:49.288 Removing: /dev/shm/spdk_tgt_trace.pid73915 00:37:49.288 Removing: /var/run/dpdk/spdk0 00:37:49.288 Removing: /var/run/dpdk/spdk1 00:37:49.288 Removing: /var/run/dpdk/spdk2 00:37:49.288 Removing: /var/run/dpdk/spdk3 00:37:49.288 Removing: /var/run/dpdk/spdk4 00:37:49.288 Removing: /var/run/dpdk/spdk_pid100117 00:37:49.288 Removing: /var/run/dpdk/spdk_pid100223 00:37:49.288 Removing: /var/run/dpdk/spdk_pid100375 00:37:49.288 Removing: /var/run/dpdk/spdk_pid100421 00:37:49.288 Removing: /var/run/dpdk/spdk_pid100465 00:37:49.288 Removing: /var/run/dpdk/spdk_pid100512 00:37:49.288 Removing: /var/run/dpdk/spdk_pid100671 00:37:49.288 Removing: /var/run/dpdk/spdk_pid100819 00:37:49.288 Removing: /var/run/dpdk/spdk_pid101090 00:37:49.288 Removing: /var/run/dpdk/spdk_pid101215 00:37:49.288 Removing: /var/run/dpdk/spdk_pid101459 00:37:49.288 Removing: /var/run/dpdk/spdk_pid101591 00:37:49.288 Removing: /var/run/dpdk/spdk_pid101726 00:37:49.288 Removing: /var/run/dpdk/spdk_pid102065 00:37:49.288 Removing: /var/run/dpdk/spdk_pid102440 00:37:49.546 Removing: /var/run/dpdk/spdk_pid102443 00:37:49.546 Removing: /var/run/dpdk/spdk_pid104648 00:37:49.546 Removing: /var/run/dpdk/spdk_pid104946 00:37:49.546 Removing: /var/run/dpdk/spdk_pid105442 00:37:49.546 Removing: /var/run/dpdk/spdk_pid105446 00:37:49.546 Removing: /var/run/dpdk/spdk_pid105782 00:37:49.546 Removing: /var/run/dpdk/spdk_pid105796 00:37:49.546 Removing: /var/run/dpdk/spdk_pid105814 00:37:49.546 Removing: /var/run/dpdk/spdk_pid105842 00:37:49.546 Removing: /var/run/dpdk/spdk_pid105851 00:37:49.546 Removing: /var/run/dpdk/spdk_pid105996 00:37:49.546 Removing: /var/run/dpdk/spdk_pid105998 00:37:49.546 Removing: /var/run/dpdk/spdk_pid106101 00:37:49.546 Removing: /var/run/dpdk/spdk_pid106103 00:37:49.546 Removing: /var/run/dpdk/spdk_pid106206 00:37:49.546 Removing: /var/run/dpdk/spdk_pid106218 00:37:49.546 Removing: /var/run/dpdk/spdk_pid106627 00:37:49.546 Removing: /var/run/dpdk/spdk_pid106675 00:37:49.546 Removing: /var/run/dpdk/spdk_pid106754 00:37:49.546 Removing: /var/run/dpdk/spdk_pid106803 00:37:49.546 Removing: /var/run/dpdk/spdk_pid107144 00:37:49.546 Removing: /var/run/dpdk/spdk_pid107392 00:37:49.546 Removing: /var/run/dpdk/spdk_pid107891 00:37:49.546 Removing: /var/run/dpdk/spdk_pid108484 00:37:49.546 Removing: /var/run/dpdk/spdk_pid109833 00:37:49.546 Removing: /var/run/dpdk/spdk_pid110423 00:37:49.546 Removing: /var/run/dpdk/spdk_pid110426 00:37:49.546 Removing: /var/run/dpdk/spdk_pid112382 00:37:49.546 Removing: /var/run/dpdk/spdk_pid112469 00:37:49.546 Removing: /var/run/dpdk/spdk_pid112560 00:37:49.546 Removing: /var/run/dpdk/spdk_pid112651 00:37:49.546 Removing: 
/var/run/dpdk/spdk_pid112809 00:37:49.547 Removing: /var/run/dpdk/spdk_pid112898 00:37:49.547 Removing: /var/run/dpdk/spdk_pid112984 00:37:49.547 Removing: /var/run/dpdk/spdk_pid113069 00:37:49.547 Removing: /var/run/dpdk/spdk_pid113417 00:37:49.547 Removing: /var/run/dpdk/spdk_pid114096 00:37:49.547 Removing: /var/run/dpdk/spdk_pid115447 00:37:49.547 Removing: /var/run/dpdk/spdk_pid115644 00:37:49.547 Removing: /var/run/dpdk/spdk_pid115925 00:37:49.547 Removing: /var/run/dpdk/spdk_pid116220 00:37:49.547 Removing: /var/run/dpdk/spdk_pid116762 00:37:49.547 Removing: /var/run/dpdk/spdk_pid116774 00:37:49.547 Removing: /var/run/dpdk/spdk_pid117126 00:37:49.547 Removing: /var/run/dpdk/spdk_pid117282 00:37:49.547 Removing: /var/run/dpdk/spdk_pid117434 00:37:49.547 Removing: /var/run/dpdk/spdk_pid117526 00:37:49.547 Removing: /var/run/dpdk/spdk_pid117677 00:37:49.547 Removing: /var/run/dpdk/spdk_pid117785 00:37:49.547 Removing: /var/run/dpdk/spdk_pid118447 00:37:49.547 Removing: /var/run/dpdk/spdk_pid118478 00:37:49.547 Removing: /var/run/dpdk/spdk_pid118518 00:37:49.547 Removing: /var/run/dpdk/spdk_pid118766 00:37:49.547 Removing: /var/run/dpdk/spdk_pid118800 00:37:49.547 Removing: /var/run/dpdk/spdk_pid118831 00:37:49.547 Removing: /var/run/dpdk/spdk_pid119253 00:37:49.547 Removing: /var/run/dpdk/spdk_pid119284 00:37:49.547 Removing: /var/run/dpdk/spdk_pid119775 00:37:49.547 Removing: /var/run/dpdk/spdk_pid73770 00:37:49.547 Removing: /var/run/dpdk/spdk_pid73915 00:37:49.547 Removing: /var/run/dpdk/spdk_pid74176 00:37:49.547 Removing: /var/run/dpdk/spdk_pid74270 00:37:49.547 Removing: /var/run/dpdk/spdk_pid74308 00:37:49.547 Removing: /var/run/dpdk/spdk_pid74419 00:37:49.547 Removing: /var/run/dpdk/spdk_pid74449 00:37:49.547 Removing: /var/run/dpdk/spdk_pid74567 00:37:49.547 Removing: /var/run/dpdk/spdk_pid74848 00:37:49.547 Removing: /var/run/dpdk/spdk_pid75024 00:37:49.547 Removing: /var/run/dpdk/spdk_pid75095 00:37:49.547 Removing: /var/run/dpdk/spdk_pid75187 00:37:49.547 Removing: /var/run/dpdk/spdk_pid75282 00:37:49.547 Removing: /var/run/dpdk/spdk_pid75315 00:37:49.547 Removing: /var/run/dpdk/spdk_pid75351 00:37:49.547 Removing: /var/run/dpdk/spdk_pid75412 00:37:49.547 Removing: /var/run/dpdk/spdk_pid75531 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76164 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76228 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76297 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76325 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76404 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76432 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76511 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76539 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76585 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76615 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76667 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76697 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76843 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76873 00:37:49.547 Removing: /var/run/dpdk/spdk_pid76949 00:37:49.547 Removing: /var/run/dpdk/spdk_pid77017 00:37:49.547 Removing: /var/run/dpdk/spdk_pid77042 00:37:49.547 Removing: /var/run/dpdk/spdk_pid77106 00:37:49.547 Removing: /var/run/dpdk/spdk_pid77140 00:37:49.547 Removing: /var/run/dpdk/spdk_pid77175 00:37:49.547 Removing: /var/run/dpdk/spdk_pid77208 00:37:49.547 Removing: /var/run/dpdk/spdk_pid77244 00:37:49.547 Removing: /var/run/dpdk/spdk_pid77273 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77313 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77342 00:37:49.805 Removing: 
/var/run/dpdk/spdk_pid77377 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77411 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77445 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77480 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77513 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77549 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77578 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77618 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77647 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77690 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77722 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77762 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77792 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77862 00:37:49.805 Removing: /var/run/dpdk/spdk_pid77973 00:37:49.805 Removing: /var/run/dpdk/spdk_pid78397 00:37:49.805 Removing: /var/run/dpdk/spdk_pid85113 00:37:49.805 Removing: /var/run/dpdk/spdk_pid85457 00:37:49.805 Removing: /var/run/dpdk/spdk_pid87886 00:37:49.805 Removing: /var/run/dpdk/spdk_pid88259 00:37:49.805 Removing: /var/run/dpdk/spdk_pid88525 00:37:49.805 Removing: /var/run/dpdk/spdk_pid88571 00:37:49.805 Removing: /var/run/dpdk/spdk_pid89440 00:37:49.805 Removing: /var/run/dpdk/spdk_pid89490 00:37:49.805 Removing: /var/run/dpdk/spdk_pid89849 00:37:49.805 Removing: /var/run/dpdk/spdk_pid90378 00:37:49.805 Removing: /var/run/dpdk/spdk_pid90829 00:37:49.805 Removing: /var/run/dpdk/spdk_pid91795 00:37:49.805 Removing: /var/run/dpdk/spdk_pid92760 00:37:49.805 Removing: /var/run/dpdk/spdk_pid92877 00:37:49.805 Removing: /var/run/dpdk/spdk_pid92945 00:37:49.805 Removing: /var/run/dpdk/spdk_pid94418 00:37:49.805 Removing: /var/run/dpdk/spdk_pid94648 00:37:49.805 Removing: /var/run/dpdk/spdk_pid99681 00:37:49.805 Clean 00:37:49.805 13:56:02 -- common/autotest_common.sh@1447 -- # return 0 00:37:49.805 13:56:02 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:37:49.805 13:56:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:49.805 13:56:02 -- common/autotest_common.sh@10 -- # set +x 00:37:49.805 13:56:02 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:37:49.805 13:56:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:49.805 13:56:02 -- common/autotest_common.sh@10 -- # set +x 00:37:49.805 13:56:02 -- spdk/autotest.sh@383 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:49.805 13:56:02 -- spdk/autotest.sh@385 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:37:49.805 13:56:02 -- spdk/autotest.sh@385 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:37:49.805 13:56:02 -- spdk/autotest.sh@387 -- # hash lcov 00:37:49.805 13:56:02 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:49.805 13:56:02 -- spdk/autotest.sh@389 -- # hostname 00:37:49.805 13:56:02 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:37:50.064 geninfo: WARNING: invalid characters removed from testname! 
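Editor's note: from this point the log is the autotest epilogue: coverage counters are captured with lcov against the fedora38 test VM, and the commands that follow below merge them with the baseline capture and strip third-party and helper-app paths before the report is generated. Condensed, the sequence amounts to the sketch below; the flags mirror the trace, except that the genhtml_* and geninfo_all_blocks rc options are dropped for brevity and the baseline file cov_base.info is assumed to have been captured earlier in the run, before the tests started.

# Condensed view of the coverage post-processing steps in this log.
out=/home/vagrant/spdk_repo/spdk/../output
lcov_opts="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

# 1. Capture the counters gathered while the tests ran.
lcov $lcov_opts -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$out/cov_test.info"

# 2. Merge with the pre-test baseline.
lcov $lcov_opts -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# 3. Remove DPDK, system headers and helper apps from the totals.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $lcov_opts -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
done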
00:38:22.251 13:56:30 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:22.251 13:56:34 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:24.876 13:56:37 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:27.407 13:56:40 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:29.946 13:56:43 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:33.240 13:56:45 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:35.769 13:56:48 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:35.769 13:56:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:35.770 13:56:48 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:35.770 13:56:48 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:35.770 13:56:48 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:35.770 13:56:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.770 13:56:48 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.770 13:56:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.770 13:56:48 -- paths/export.sh@5 -- $ export PATH 00:38:35.770 13:56:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:35.770 13:56:48 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:38:35.770 13:56:48 -- common/autobuild_common.sh@437 -- $ date +%s 00:38:35.770 13:56:48 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715781408.XXXXXX 00:38:35.770 13:56:48 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715781408.EDKlIE 00:38:35.770 13:56:48 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:38:35.770 13:56:48 -- common/autobuild_common.sh@443 -- $ '[' -n main ']' 00:38:35.770 13:56:48 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:38:35.770 13:56:48 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:38:35.770 13:56:48 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:38:35.770 13:56:48 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:38:35.770 13:56:48 -- common/autobuild_common.sh@453 -- $ get_config_params 00:38:35.770 13:56:48 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:38:35.770 13:56:48 -- common/autotest_common.sh@10 -- $ set +x 00:38:35.770 13:56:48 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang' 00:38:35.770 13:56:48 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:38:35.770 13:56:48 -- pm/common@17 -- $ local monitor 00:38:35.770 13:56:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:35.770 13:56:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:35.770 13:56:48 -- pm/common@21 -- $ date +%s 00:38:35.770 13:56:48 -- pm/common@25 -- $ sleep 1 00:38:35.770 13:56:48 -- pm/common@21 -- $ date +%s 00:38:35.770 13:56:48 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715781408 00:38:35.770 13:56:48 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715781408 00:38:35.770 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715781408_collect-vmstat.pm.log 00:38:35.770 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715781408_collect-cpu-load.pm.log 00:38:36.703 13:56:49 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:38:36.703 13:56:49 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:38:36.703 13:56:49 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:38:36.703 13:56:49 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:36.703 13:56:49 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:36.703 13:56:49 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:36.703 13:56:49 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:36.703 13:56:49 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:36.703 13:56:49 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:36.703 13:56:49 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:38:36.703 13:56:49 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:36.703 13:56:49 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:36.703 13:56:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:36.703 13:56:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:36.704 13:56:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:36.704 13:56:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:38:36.704 13:56:49 -- pm/common@44 -- $ pid=121439 00:38:36.704 13:56:49 -- pm/common@50 -- $ kill -TERM 121439 00:38:36.704 13:56:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:36.704 13:56:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:38:36.704 13:56:49 -- pm/common@44 -- $ pid=121441 00:38:36.704 13:56:49 -- pm/common@50 -- $ kill -TERM 121441 00:38:36.704 + [[ -n 5997 ]] 00:38:36.704 + sudo kill 5997 00:38:38.085 [Pipeline] } 00:38:38.102 [Pipeline] // timeout 00:38:38.107 [Pipeline] } 00:38:38.122 [Pipeline] // stage 00:38:38.127 [Pipeline] } 00:38:38.143 [Pipeline] // catchError 00:38:38.150 [Pipeline] stage 00:38:38.152 [Pipeline] { (Stop VM) 00:38:38.165 [Pipeline] sh 00:38:38.443 + vagrant halt 00:38:42.626 ==> default: Halting domain... 00:38:49.196 [Pipeline] sh 00:38:49.475 + vagrant destroy -f 00:38:53.661 ==> default: Removing domain... 
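Editor's note: a little earlier in this log the autopackage step starts the CPU-load and vmstat collectors into the power/ output directory and, at exit, stop_monitor_resources signals them via their pid files (pids 121439 and 121441 here). The sketch below is an illustrative reconstruction of that start/stop pattern, not the real pm/common implementation: the backgrounding, the meaning of -l/-p, and the pid-file handling are assumptions inferred from the xtrace.

# Illustrative reconstruction of the resource-monitor lifecycle (assumed).
power_dir=/home/vagrant/spdk_repo/spdk/../output/power
collector=/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load

# Start (during autopackage): log into $power_dir under a per-run name.
"$collector" -d "$power_dir" -l -p "monitor.autopackage.sh.$(date +%s)" &

# Stop (at exit): signal the collector recorded in its pid file, if any.
if [[ -e "$power_dir/collect-cpu-load.pid" ]]; then
    kill -TERM "$(<"$power_dir/collect-cpu-load.pid")" || true
fi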
00:38:53.672 [Pipeline] sh 00:38:53.992 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:38:54.001 [Pipeline] } 00:38:54.018 [Pipeline] // stage 00:38:54.024 [Pipeline] } 00:38:54.041 [Pipeline] // dir 00:38:54.046 [Pipeline] } 00:38:54.063 [Pipeline] // wrap 00:38:54.069 [Pipeline] } 00:38:54.085 [Pipeline] // catchError 00:38:54.094 [Pipeline] stage 00:38:54.096 [Pipeline] { (Epilogue) 00:38:54.112 [Pipeline] sh 00:38:54.393 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:00.965 [Pipeline] catchError 00:39:00.967 [Pipeline] { 00:39:00.985 [Pipeline] sh 00:39:01.271 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:01.271 Artifacts sizes are good 00:39:01.281 [Pipeline] } 00:39:01.299 [Pipeline] // catchError 00:39:01.311 [Pipeline] archiveArtifacts 00:39:01.319 Archiving artifacts 00:39:01.499 [Pipeline] cleanWs 00:39:01.522 [WS-CLEANUP] Deleting project workspace... 00:39:01.522 [WS-CLEANUP] Deferred wipeout is used... 00:39:01.528 [WS-CLEANUP] done 00:39:01.530 [Pipeline] } 00:39:01.546 [Pipeline] // stage 00:39:01.551 [Pipeline] } 00:39:01.567 [Pipeline] // node 00:39:01.571 [Pipeline] End of Pipeline 00:39:01.600 Finished: SUCCESS